Each integer type has a different storage range:
Type Capacity
Int16 — (-32,768 to +32,767)
Int32 — (-2,147,483,648 to +2,147,483,647)
Int64 — (-9,223,372,036,854,775,808 to +9,223,372,036,854,775,807)
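These ranges are exposed directly by the framework, so you never need to hard-code them. A minimal sketch (the class name `Ranges` is just for illustration):

```csharp
using System;

class Ranges
{
    static void Main()
    {
        // Every integral type exposes MinValue/MaxValue constants.
        Console.WriteLine($"Int16: {Int16.MinValue} to {Int16.MaxValue}");
        Console.WriteLine($"Int32: {Int32.MinValue} to {Int32.MaxValue}");
        Console.WriteLine($"Int64: {Int64.MinValue} to {Int64.MaxValue}");
    }
}
```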
As stated by James Sutherland in his answer:
int and Int32 are indeed synonymous; int will be a little more
familiar looking, Int32 makes the 32-bitness more explicit to those
reading your code. I would be inclined to use int where I just need
‘an integer’, Int32 where the size is important (cryptographic code,
structures) so future maintainers will know it’s safe to enlarge an
int if appropriate, but should take care changing Int32 variables
in the same way.
The resulting code will be identical: the difference is purely one of
readability or code appearance.
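You can verify the synonymy yourself: `int` is simply the C# keyword alias for `System.Int32`, so the two compare as the same runtime type (class name `Alias` is illustrative):

```csharp
using System;

class Alias
{
    static void Main()
    {
        // int is a keyword alias for System.Int32 — the same type.
        Console.WriteLine(typeof(int) == typeof(Int32));  // True

        int a = 42;
        Int32 b = a;  // no conversion needed; identical types
        Console.WriteLine(b);
    }
}
```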
The only other real difference here is the size. All of these are signed integer types; they differ only in the number of bytes they occupy:
Int16: 2 bytes
Int32 and int: 4 bytes
Int64: 8 bytes
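The sizes can be confirmed with the `sizeof` operator, which is allowed on these primitive types without `unsafe`:

```csharp
using System;

class Sizes
{
    static void Main()
    {
        // sizeof on primitive types is a compile-time constant.
        Console.WriteLine(sizeof(Int16)); // 2
        Console.WriteLine(sizeof(Int32)); // 4
        Console.WriteLine(sizeof(int));   // 4 — same type as Int32
        Console.WriteLine(sizeof(Int64)); // 8
    }
}
```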
There is one small difference between Int64 and the rest: on a 32-bit platform, reads and writes to an Int64 storage location are not guaranteed to be atomic, because the value spans two machine words and can be torn mid-update. Atomicity is guaranteed for all of the other types.
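If an Int64 field is shared between threads, you can make its reads and writes atomic on every platform with the `System.Threading.Interlocked` class. A minimal sketch (the wrapper class `AtomicLong` is hypothetical, not a framework type):

```csharp
using System.Threading;

class AtomicLong
{
    private long _value;

    // On a 32-bit platform a plain read of a long can observe a torn
    // value; Interlocked.Read guarantees an atomic 64-bit read.
    public long Read() => Interlocked.Read(ref _value);

    // Interlocked.Exchange writes the whole 64-bit value atomically.
    public void Write(long value) => Interlocked.Exchange(ref _value, value);
}
```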