
[Solved]: How does the CPU actually retrieve data from memory when you call a variable in a programming language?

Problem Detail: 

As I understand it from the internet sources I have been able to find, when you declare and initialize a variable in Java, you are allocating that data, say an 8-byte float, to a particular memory cell in RAM, with the address specified by a row and column number. Now suppose I want to access this memory location and print the float. As a programmer, I clearly will not write out the binary representation of the float's memory location for the CPU to process; instead, I will use the variable name I declared earlier.

When that variable name is translated into binary code and sent to the CPU, how does the CPU know which memory location the variable refers to? For that to work, doesn't the memory location of the variable have to be stored directly on the CPU? Because if not, you would have to store the memory location of the float in some memory cell as well; in other words, when you use the variable, you would first have to fetch from RAM the memory address the variable represents, and then use that information to go back to RAM and retrieve the float. But for the CPU to get to the memory address of the memory address of the float, it still either has to store that address directly on the CPU or store it in RAM, and the same story repeats recursively.

What is really happening when you use a variable? How does the CPU know the memory address associated with the variable in the first place without referring to RAM? Because if the CPU is referring to RAM, that means it has already gotten the memory address from somewhere.

Asked By : Kun

Answered By : Mr Tsjolder

Your assumption that the variable is sent to the CPU is wrong. Every program must be compiled to machine code (or it will be interpreted, but that roughly comes down to the same thing). The idea of compilation is to translate the human-readable code into a series of bits that can be read by the processor. To get an idea of what those bits do, you could take a look at assembly, which is little more than a human-readable encoding of a CPU's instruction set.
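To make this concrete, here is a minimal Java sketch (the class, method, and variable names are my own illustration, not from your question): after compilation, the variable names no longer exist as far as the executing machine is concerned; the bytecode refers to them only by local-variable slot numbers, which you can inspect yourself with javap.

    // Sketch: a tiny Java method with two named variables.
    public class Demo {
        static double area(double radius) {
            double pi = 3.14159;   // "pi" and "radius" are names only the compiler sees
            return pi * radius * radius;
        }
    }

    // Compiling with `javac Demo.java` and disassembling with `javap -c Demo`
    // prints bytecode along these lines (exact constant-pool indices may differ);
    // the names are gone, replaced by slot numbers:
    //
    //   static double area(double);
    //     Code:
    //        0: ldc2_w        #2   // double 3.14159d
    //        3: dstore_2           // store into slot 2 (what we called "pi")
    //        4: dload_2            // load slot 2
    //        5: dload_0            // load slot 0 (what we called "radius")
    //        6: dmul
    //        7: dload_0
    //        8: dmul
    //        9: dreturn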

Now the trick is that the compiler builds a so-called symbol table, where it keeps track of all variables that have been defined by the programmer(s). Once it knows about all of the variables, it also knows how much memory the program needs (not considering dynamic allocation, of course) and can therefore reserve space for them. Each variable is then mapped to some piece of memory, or, if the compiler is smart enough, certain (temporary) variables may even live only in registers on the CPU to save memory (I am not completely sure about that last part). The compiler thus emits a series of bits in which every variable has been replaced by the appropriate register or memory address; the CPU itself does not know or care about variables.
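A toy model of that bookkeeping, in Java (entirely my own illustration of the idea, not how any real compiler is written): the symbol table is little more than a map from a variable's name to the offset the compiler has assigned to it, and once code generation is done the names themselves are no longer needed.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Toy symbol table: maps variable names to byte offsets inside one
    // block of memory the compiler has decided to reserve.
    class SymbolTable {
        private final Map<String, Integer> offsets = new LinkedHashMap<>();
        private int nextOffset = 0;

        // Called once per declaration; records where the variable will live.
        int declare(String name, int sizeInBytes) {
            int offset = nextOffset;
            offsets.put(name, offset);
            nextOffset += sizeInBytes;
            return offset;
        }

        // Called when generating code for a use of the variable:
        // the emitted instruction contains only this number, never the name.
        int lookup(String name) {
            return offsets.get(name);
        }

        public static void main(String[] args) {
            SymbolTable table = new SymbolTable();
            table.declare("myFloat", 8);   // an 8-byte floating-point value
            table.declare("counter", 4);   // a 4-byte int
            System.out.println("myFloat lives at offset " + table.lookup("myFloat"));
            System.out.println("counter lives at offset " + table.lookup("counter"));
        }
    }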

For dynamically allocated memory things get a bit trickier, but eventually it comes down to memory addresses being stored in variables, and those variables are again handled by the compiler as described above.
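In Java terms (again just a sketch of the idea): the object lives on the heap at an address chosen at run time, while the reference variable that holds that address is an ordinary local variable sitting in a slot the compiler assigned ahead of time, so there is no infinite regress of addresses.

    // Sketch: the array is allocated on the heap at run time, at an address
    // only the JVM knows. The variable "values" holds (a handle to) that
    // address, and "values" itself occupies a fixed local-variable slot
    // chosen by the compiler.
    public class DynamicDemo {
        public static void main(String[] args) {
            double[] values = new double[1];  // run-time (heap) allocation
            values[0] = 2.5;                  // indirection: follow the reference, then index
            System.out.println(values[0]);
        }
    }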

I hope I understood your question correctly and did not spread too much nonsense (if I did, please correct me), as I am no authority whatsoever in this area, but that is roughly my view on these things.

Best Answer from StackExchange

Question Source : http://cs.stackexchange.com/questions/55714
