[H="3"]Value Types[/H]
Value types contain … values (doh). A value is a piece of data that stands on its own; it just exists. To explain what I mean by that: a Reference type always points to something else, it is NOT on its own. But Value types are just that – a value. Value types can be converted into each other, and one Value type variable can be assigned the data of another, for example. Meaning: you can store a small number in a byte and later convert it into an int, or take a piece of text like "42" and have the server parse it into the actual number 42 you can do math with (ever tried in Excel to turn a text cell into a numeric cell? Same idea). Now as I said, C# is very obsessive about data types, and because of that there are tons of sub-types and sub-sub-types, and you should understand each of them to use them properly …
TL;DR version: For number values, always use numeric value types. If you have a variable that will contain small numbers, use byte or sbyte. If you need large numbers (above 255), you will likely end up with int. If you need a variable with a very wide range of values/large numbers both positive and negative AND you need digits beyond the dot, float is what you will most likely need. And if you need to handle letters or symbols or numbers as part of a text/sentence, char is your value type - but char allows only ONE letter/digit/symbol. If you need more than one, you have to move on from Value types to the Reference type "string".
And if you need to enumerate a fixed set of named options (and not just store a loose number), use enum; that is easiest for the server to digest.
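To make that TL;DR concrete, here is a minimal sketch of what those declarations look like (all variable names invented for illustration):
[CODE=csharp]
byte hairHue = 42;       // small whole number, 0-255 fits into a single byte
int goldAmount = 100000; // larger whole number
float itemWeight = 5.5f; // number with digits after the dot (note the f suffix)
char initial = 'A';      // exactly ONE letter/digit/symbol, in single quotes
string itemName = "a rusty sword"; // more than one character -> the Reference type string
[/CODE]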
Here is an overview over the Value type hierarchy:
- Simple
  - Numeric
    - Integral
    - Floating-point
    - Decimal
  - Bool
- User-defined (struct)
- Enumeration
I think the above (while true and basic theory information) is not going to help beginners much, so we’ll ignore it for now – but if you want to go deeper into these terms, the list and hierarchy will help you google information about C# and learn more! For the time being let’s just explain all of them as if they were on the same hierarchy level.
1. bool
Bool types contain the result of a very simple test/judgment: is something TRUE or is something FALSE? (Is that hair color red? Is that speech the same as the previous one? Is a certain property set or a certain event happening?) Every time you want to do that kind of test and store the result as data, you must define a bool type.
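A minimal sketch of that kind of test, with invented names:
[CODE=csharp]
string hairColor = "red";
string speech = "hello";
string lastSpeech = "hello";

bool isRed = (hairColor == "red");        // true - the test passed
bool sameSpeech = (speech == lastSpeech); // true - both texts match

if (isRed)
{
    // runs only when the stored test result is true
}
[/CODE]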
2. byte
A number between 0 and 255 (aka the possible values of a computer byte), not stored as decimal digits (2 … 5 … 10 etc.) but as binary code, the way a computer counts, using only “0” and “1” and 8 positions for it. Why does that matter? A byte can contain a number between 0 and 255 while using MUCH less memory than the bigger numeric types: a single byte instead of, say, the 4 bytes an int occupies. A computer does not count like human beings, so every representation further away from its native on/off world costs extra space and handling. A little bit like explaining a color to a blind and deaf man with extremely bad memory: you have to include all the information about what a color even is, what that color means, how writing that color in letters would be done etc. etc. It is very inefficient.
On the other hand, the byte way of storing numbers is extremely efficient for a computer. It is based on counting “electricity on” and “electricity off” states of being, and that is what a computer naturally works with, no translation needed. So this is how storing numbers in a byte works: you have a row of 8 positions, the bits. Every bit has ONE of two options: it can be switched off (0) or switched on (1). Each bit has a value assigned to it, and the computer sums up the value of every bit that is switched on (1). The values assigned to the bits are: 1-2-4-8-16-32-64-128. You see the pattern? Always double the previous bit.
Now if bit 1 is switched on (1) and all other bits are switched off (0), what value does that give the whole byte? Well, since bit 1 is the only one that is on and it has an assigned value of 1, we have a total byte value of 1. Written out, that byte is 0b00000001 (0b marks a binary number, followed by the 8 bits; by convention the highest-value bit is written first, so our bit 1 sits at the far right). Yes, counting by doubling bit values is very different from how humans count in tens, but it is essential to understand when writing code.
Let’s try another example: we want to store a value of 50 in our byte. How would we do that? Well, we can switch on the bits with the values 32 and 16 (bits 6 and 5); that already gives us 48. If we add the bit with value 2 (bit 2), we have a total sum of 50. So the byte would look like … 0b00110010. Bits 6, 5 and 2 are on/1 and thus summed up. Makes sense?
Now the good news: you don’t need to do these calculations every time to figure out what the byte looks like; simply declaring the byte as having that value is enough. The compiler handles the rest (working out what the byte looks like). But you should understand how that works to use your types well. And if you have to handle values in your scripting which will not go above 255 (which is the total sum of every bit switched on), you now know to use a byte type, because it only needs a memory space of 1 byte, the smallest data type size C# has.
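For example (the 0b… binary literal form needs C# 7.0 or newer, so treat it as illustration; on older compilers you would just write the plain number):
[CODE=csharp]
byte fifty = 50;             // you simply declare the value...
byte alsoFifty = 0b00110010; // ...and this is the bit pattern behind it: 32 + 16 + 2 = 50
byte everyBitOn = 255;       // 128+64+32+16+8+4+2+1, the maximum
// byte tooBig = 256;        // compile error - does not fit into 8 bits
[/CODE]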
3. sbyte
Same as byte, but the range is shifted: instead of 0 to 255 it is -128 to +127. sbyte stands for "signed byte" – the sign (plus or minus) is now part of the stored value. Does that number range ring a bell? Yes, it is the Ultima Online minimum/maximum z value for the map/buildings, so obviously that value is stored as an sbyte.
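A tiny sketch of that (the variable names are invented, a real server script has its own):
[CODE=csharp]
sbyte dungeonFloor = -120; // fits, because sbyte goes down to -128
sbyte towerTop = 127;      // the highest possible z
// sbyte tooLow = -129;    // compile error - outside the sbyte range
[/CODE]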
4. ushort
Same as byte, but now with double the amount of bits (basically 2 bytes in a row used to store a single number/sum). Because it is two bytes instead of one, it needs double the memory. The range of possible values stored in a ushort type is 0 to 65,535.
5. short
The same as ushort, but – just as sbyte is a byte shifted half of its value range into the negative numbers – short is a ushort shifted half of its value range into the negative. In other words: instead of a range from 0 to 65,535 it covers a range between -32,768 and 32,767. Again, of course, two bytes of memory are used to store that.
6. uint
Same as ushort, just again double the size (4 bytes) to store a single number. The range of possible values is 0 to 4,294,967,295.
7. int
Same as uint (4 bytes for one number stored) but shifted into the negative range for half of the numbers range again. So the range of possible stored values is between -2,147,483,648 and 2,147,483,647.
8. ulong
If you’ve followed me until here you can guess what that is … yes, 8 bytes (double the size of uint). The range of possible values is 0 to 18,446,744,073,709,551,615.
9. long
You can guess it … yes that one uses 8 bytes too but is shifted half its range into the negatives. The range is between -9,223,372,036,854,775,808 and 9,223,372,036,854,775,807.
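You never have to memorize those ranges by heart, by the way; every numeric type carries its own limits as built-in constants:
[CODE=csharp]
using System;

Console.WriteLine(short.MinValue); // -32768
Console.WriteLine(short.MaxValue); // 32767
Console.WriteLine(uint.MaxValue);  // 4294967295
Console.WriteLine(long.MaxValue);  // 9223372036854775807
[/CODE]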
10. float
Now we leave the whole numbers and enter the realm of numbers that have digits after the dot. The first value type for that is called “float”. The float is your usual bet for number handling when you need your value to contain digits after the dot. The range is approximately ±1.5 x 10^-45 to ±3.4 x 10^38, and a float holds about 7 significant digits. Float needs 4 bytes for storage.
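One small trap when declaring floats: a bare literal with a dot is treated as a double by default, so you need the f suffix:
[CODE=csharp]
float speed = 2.5f;    // the f suffix marks the literal as a float
// float bad = 2.5;    // compile error - a bare 2.5 is a double literal
double distance = 2.5; // no suffix needed for double
[/CODE]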
11. double
Similar to float, that one allows even larger numbers to be stored and has a higher precision (more significant digits). The value range for double is approximately ±5.0 x 10^-324 to ±1.7 x 10^308, and it holds 15-16 significant digits. Double needs … double the storage space of a float: 8 bytes instead of 4.
12. decimal
Decimal is again more precise than double, but note: it does NOT cover a larger range. The range of possible values lies between -7.9 x 10^28 and +7.9 x 10^28 … have fun. Use a decimal when you need 28-29 significant digits of exactness, typically for money or certain math/science calculations. Which means very likely: rarely. A decimal needs 16 bytes to be stored.
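Here is a small sketch of the precision difference between the three floating types; the printed digit counts can vary a little between runtimes, but the pattern holds:
[CODE=csharp]
using System;

float f = 1f / 3f;    // ~7 significant digits
double d = 1d / 3d;   // ~15-16 significant digits
decimal m = 1m / 3m;  // 28-29 significant digits

Console.WriteLine(f); // 0.3333333
Console.WriteLine(d); // 0.333333333333333...
Console.WriteLine(m); // 0.3333333333333333333333333333
[/CODE]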
13. char
A char type contains text. It is the most alien kind of data to a computer, which naturally only handles bits and bytes – nothing like a letter at all; at least numbers are somewhat familiar to it. So storing text takes the most memory and processing power for a server. The important point is that for a computer “text” is so alien that it can contain anything, even special signs (exclamation marks!) or NUMBERS. The server will treat anything inside a char pretty much as “that unintelligible stuff”. So yes, you can put anything into a char, but you can’t do math with it and you are limited in how you can manipulate it, because the server is essentially quite “blind” to anything defined as a char.
Now the crucial part: a char type is very comfortable; it lets you avoid any headache about Value type selection if you just use chars (and strings) for everything. But not only does that mean lots of memory use, performance loss and limits on how you can manipulate the contained data – it also throws away the main advantage of having these Value types in the first place … avoiding coding errors. The server will not be able to tell whether what you do to this data is wrong, whether you are producing garbage and feeding failures into the whole system. Any manipulation you do will be followed blindly, no matter what it produces as a result. To put it bluntly: you are going to waste resources and likely end up working like a butcher instead of a surgeon in at least some places. So use char types only if really necessary, please!
Usage/memory: a char really holds only a SINGLE letter/sign/symbol/number digit. If you need more than one “position”, you move on to a “string” – a sequence of chars, and (as mentioned in the TL;DR) a Reference type – and each position needs 2 bytes to be stored. So a sentence of 16 positions including letters/spaces/punctuation etc. already means 32 bytes used!
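In code the difference looks like this; note the single quotes for a char versus double quotes for a string:
[CODE=csharp]
char letter = 'A';                // exactly ONE position, single quotes
char digit = '7';                 // a "number" stored as a symbol - no math possible with it
string sentence = "Hello there!"; // 12 positions -> about 24 bytes of character data
// char wrong = 'AB';             // compile error - a char holds a single position only
[/CODE]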
14. enum
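An enum gives names to a fixed set of related options, so instead of remembering that some bare number like 2 means “red” you write the name out. A minimal runnable sketch, with invented names:
[CODE=csharp]
// the fixed set of options; each name is just a number behind the scenes
enum HairColor
{
    Black,  // = 0
    Blonde, // = 1
    Red     // = 2
}

class EnumDemo
{
    static void Main()
    {
        HairColor hair = HairColor.Red;

        if (hair == HairColor.Red)
        {
            // much more readable than checking against a bare 2
        }
    }
}
[/CODE]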
15. struct
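A struct is the “User-defined” entry from the hierarchy list above: you bundle several Value type fields into one new Value type of your own. A minimal sketch with invented names – note how assigning one struct to another COPIES the data instead of pointing at it, which is exactly the Value type behavior from the start of this section:
[CODE=csharp]
// a user-defined Value type bundling three simple values into one
struct MapPoint
{
    public int X;
    public int Y;
    public sbyte Z; // z fits into an sbyte, as noted above

    public MapPoint(int x, int y, sbyte z)
    {
        X = x;
        Y = y;
        Z = z;
    }
}

class StructDemo
{
    static void Main()
    {
        MapPoint spawn = new MapPoint(1000, 1000, 0);
        MapPoint copy = spawn; // copies the data, no shared reference
        copy.X = 2000;         // spawn.X is still 1000
    }
}
[/CODE]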
Short version of Value types
(courtesy cds333 at https://social.msdn.microsoft.com/F...imal-vs-double-difference?forum=csharpgeneral):
C# Type|.NET Framework (System) type|Signed?|Bytes Occupied|Possible Values
sbyte|System.SByte|Yes|1|-128 to 127
short|System.Int16|Yes|2|-32768 to 32767
int|System.Int32|Yes|4|-2147483648 to 2147483647
long|System.Int64|Yes|8|-9223372036854775808 to 9223372036854775807
byte|System.Byte|No|1|0 to 255
ushort|System.UInt16|No|2|0 to 65535
uint|System.UInt32|No|4|0 to 4294967295
ulong|System.UInt64|No|8|0 to 18446744073709551615
float|System.Single|Yes|4|Approximately ±1.5 x 10^-45 to ±3.4 x 10^38 with 7 significant figures
double|System.Double|Yes|8|Approximately ±5.0 x 10^-324 to ±1.7 x 10^308 with 15 or 16 significant figures
decimal|System.Decimal|Yes|16|Approximately ±1.0 x 10^-28 to ±7.9 x 10^28 with 28 or 29 significant figures
char|System.Char|N/A|2|Any Unicode character (16 bit)
bool|System.Boolean|N/A|1|true or false