TILTW - Week 03

Yo guys! How’s it going!? This was a nice week. It started off slow and picked up pace, but here we are: worked through the week and enjoyed the weekend.

Since I am learning about low-level stuff and how code works behind the scenes, I thought about learning what actually happens inside the CPU. I was sitting with my friend in class wondering why I should learn Assembly, to which my friend replied, “you should learn x86 Assembly”, and I was like, wait, so there are types of Assembly? (I felt dumb in that moment.) So I pulled up my laptop and started researching. Then I came across the ARM, x86, and x86_64 architectures and realized this was exactly what I was searching for.

ARM (Advanced RISC Machine) is an architecture built around a smaller set of simpler instructions. Since its instructions are simpler, the energy it consumes is considerably lower than that of other architectures. Due to this lower power consumption, it is mostly used in smaller, simpler devices with limited battery capacity. Although it seems like “man! this looks so simple… which device would actually use it!?”, you will be shocked to learn that this architecture is used in Apple’s M-series chips. YES! I was shocked as well at how this simple architecture can handle such complex tasks, but Apple’s engineers have made it happen. Apart from that, it is mostly used in mobile phones, tablets, smartwatches, embedded systems like the Raspberry Pi, servers, and gaming consoles (Nintendo). So ARM is actually useful despite looking so simple.

Next was the x86 architecture, which is capable of handling complex tasks. It was developed by Intel and is a Complex Instruction Set Computing (CISC) architecture. It is obviously less power efficient than ARM, since it handles complex instructions, and it is mostly used in laptops and desktops. Apple’s pre-M1 Macs (with Intel chips) used this architecture.

Finally, we have the x86_64 architecture. It is a 64-bit extension of x86 (originally developed by AMD as AMD64). It supports more complex computing than plain x86: it consumes more power, is optimized for higher performance, and can access far larger amounts of memory than the architectures above. Almost all modern laptops and desktops use it now, since it supports 64-bit computing and can address more than 4 GB of RAM. It actually has a lot of applications, and it would be tiresome to list all of them here.

Phew… that was a lot of info on chips and architectures, but as soon as I finished reading, one more term caught my eye: “64-bit”. I wondered what exactly 32-bit and 64-bit mean, pulled my laptop back up, and started again. Basically, it represents the data-handling and data-addressing capability of the processor. A 32-bit processor handles data in chunks that are 32 bits (4 bytes) wide; to put it simply, it can process or transfer 32 bits of data at a time during each clock cycle. (A clock cycle is the basic unit of time in a CPU.) That was the data-handling part. For the data-addressing part, a 32-bit processor can address up to 2³² different memory locations, which equals 4 GB of RAM. This is the maximum amount of memory a 32-bit processor can use directly. FYI, 2³² = 4,294,967,296 unique addresses, while 2⁶⁴ = 18,446,744,073,709,551,616. Now that’s crazy: the number of addresses a 64-bit processor has access to is just huge. No wonder the computation is so fast.

This fueled my enthusiasm to learn more about low-level stuff and computers. Later in the week, I learnt more about pointers and programming in CPP (I’m learning it now, btw) and set up my NVim. Configuring my own nvim was fun. It was the first time I set up my own nvim config, and I am writing this blog in nvim itself. I need to get more used to the keybinds. I believe if I use it more, I can actually work faster than I used to.

Well, that was it for the week. Next week, I am planning to learn more about GPU architecture and programming, since I am soon planning to take a class on the same. I think it will be fun to learn new things, and it should answer my question, “how the hell is the GPU so fast!?”. Well, I’ll write to you guys next week. Until then,

Cya;)