Even if you don’t care about its practical applications in modern programming, it offers a look back at a time when programming was more of a pain in the ass because shit needed to be set up properly on top of having a good algorithm. It’s actually still relevant in deeper technical programming closer to the hardware level, though probably not so much for app developers.
So let’s say you have an array of 16-bit integers. You would have something like this:
unsigned short *myArray = new unsigned short[3];
So you fill it up with data and all is good, right? Well, now your next requirement is to divide each element into two parts, the lower byte and the upper byte, because the upper byte indicates the flashing pattern of the red LED and the lower byte indicates the flashing pattern of the green LED. Well FML, right? No!
We make a new byte pointer (a char in this case) and point it at the main array via a typecast:
unsigned char *birchPointer = (unsigned char *) myArray;
Now, because a char is one byte in size, advancing the index moves the pointer to the next 8-bit position. So let’s say myArray contains the following information:
myArray[0] = 0xBEEF; myArray[1] = 0xCAFE; myArray[2] = 0xBABE;
Now if you use the char pointer to access myArray, you’ll get the following information (provided your system is big endian):
birchPointer[0] = 0xBE; birchPointer[1] = 0xEF; birchPointer[2] = 0xCA; birchPointer[3] = 0xFE; birchPointer[4] = 0xBA; birchPointer[5] = 0xBE;
If you’re using a machine that’s little endian (which most PCs are), then the byte values will be swapped like so (due to little endian storing the lower byte first in the memory sequence):
birchPointer[0] = 0xEF; birchPointer[1] = 0xBE; birchPointer[2] = 0xFE; birchPointer[3] = 0xCA; birchPointer[4] = 0xBE; birchPointer[5] = 0xBA;
So there you have it: splitting a 16-bit array into an 8-bit array and learning something about how endianness affects it.