Binary, Octal, and Hexadecimal: Number Systems Every Developer Needs

Every experienced developer has internalized a set of mental shortcuts for switching between number bases. You glance at 0xFF and read 255. You see 0755 in a file permission and know immediately what it means. You look at a hex dump and understand which bytes form which values. None of this is magic — it's a small set of patterns that become automatic with use.

Why Computers Use Binary

A transistor has two stable states: conducting current (on) or not conducting (off). It's the most reliable digital switch we can build, and we can fit billions of them on a chip. With only two states, the natural number system for hardware is base-2 (binary), where each digit is either 0 or 1.

Every number, character, instruction, and pixel your computer processes is ultimately represented as binary. An 8-bit byte can hold 256 distinct values (2⁸). A 32-bit integer can hold ~4.3 billion values. A 64-bit integer can hold ~18.4 quintillion values.

You rarely work with binary directly because it's verbose — 255 requires 8 binary digits: 11111111. But understanding the underlying structure unlocks everything else.

Converting Between Bases

The most useful mental operation is converting between binary, decimal, and hexadecimal.

Binary to decimal: Each bit position has a value that's a power of 2, starting from the right at 2⁰.

Bit:     1    0    1    1    0    1    0    0
Weight:  128  64   32   16   8    4    2    1

128 + 0 + 32 + 16 + 0 + 4 + 0 + 0 = 180
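
If you want to sanity-check the sum in code, here's a minimal Python sketch of the same positional walk (int(s, 2) is the built-in shortcut):

bits = "10110100"
# Each bit contributes bit * 2^position, counting positions from the right.
value = sum(int(b) << (len(bits) - 1 - i) for i, b in enumerate(bits))
print(value)         # 180
print(int(bits, 2))  # 180 (the built-in does the same thing)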

Decimal to binary: Repeatedly divide by 2 and collect the remainders from bottom to top.

180 ÷ 2 = 90 remainder 0
90  ÷ 2 = 45 remainder 0
45  ÷ 2 = 22 remainder 1
22  ÷ 2 = 11 remainder 0
11  ÷ 2 =  5 remainder 1
5   ÷ 2 =  2 remainder 1
2   ÷ 2 =  1 remainder 0
1   ÷ 2 =  0 remainder 1
Read upward: 10110100 = 180
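
The same division loop as a small Python function (bin() is the built-in equivalent):

def to_binary(n):
    # Remainders come out least-significant first, so prepend each one
    # to read the result "bottom to top" as in the worked example.
    digits = ""
    while n > 0:
        digits = str(n % 2) + digits
        n //= 2
    return digits or "0"

print(to_binary(180))  # '10110100'
print(bin(180))        # '0b10110100'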

For quick decimal-to-hex-to-binary conversions without paper, the Number Base Converter handles any base instantly.

Why Hex Is Everywhere

Hexadecimal (base-16) uses digits 0–9 and letters A–F. One hex digit represents exactly 4 bits. Two hex digits represent exactly 8 bits — one byte.

That alignment is why hex dominates in systems programming, networking, and debugging. Binary is unwieldy to read; decimal doesn't map cleanly to bit boundaries. Hex is the sweet spot. (Wikipedia's hexadecimal article walks through the history and notation conventions.)

Binary:   1111 1111
Hex:      F    F     = 0xFF = 255 decimal

Binary:   1010 0011
Hex:      A    3     = 0xA3 = 163 decimal
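
You can watch the 4-bit alignment directly in Python: shifting by 4 and masking with 0xF isolates one hex digit (nibble) at a time.

n = 0xA3
print(f"{n:08b}")                     # '10100011'
# High nibble and low nibble, each exactly one hex digit:
print(f"{n >> 4:04b} {n & 0xF:04b}")  # '1010 0011' → A and 3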

A few values worth memorizing because they appear constantly:

Hex     Decimal   Binary
0x0F    15        0000 1111
0xFF    255       1111 1111
0x80    128       1000 0000
0x7F    127       0111 1111
0x100   256       1 0000 0000

Hex prefixes vary by language: 0x in C, Python, JavaScript; # in CSS; U+ for Unicode code points; \x in string literals.

Octal and File Permissions

Octal (base-8) uses digits 0–7. One octal digit represents exactly 3 bits.

Octal was historically used on systems with 12-, 24-, or 36-bit words where grouping by 3 was natural. Its main modern relevance is Unix file permissions.

A file's permission bits are 9 bits: three groups of three bits for owner, group, and others. Each group: read (4), write (2), execute (1). Three bits = one octal digit.

chmod 755  →  111 101 101
              rwx r-x r-x
              7   5   5

chmod 644  →  110 100 100
              rw- r-- r--
              6   4   4

chmod 600  →  110 000 000
              rw- --- ---
              6   0   0

When you see 0755, the leading 0 is the octal prefix in C/Python. The mental arithmetic is fast once you know the bit patterns: 7 = full permissions, 5 = read+execute, 4 = read only, 6 = read+write.
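
That decoding is mechanical enough to sketch in a few lines of Python; each octal digit is shifted out and tested against the read/write/execute bits:

def mode_to_rwx(mode):
    # 0o755 → 'rwxr-xr-x': three 3-bit groups for owner, group, others.
    out = ""
    for shift in (6, 3, 0):
        bits = (mode >> shift) & 0b111
        out += "r" if bits & 4 else "-"
        out += "w" if bits & 2 else "-"
        out += "x" if bits & 1 else "-"
    return out

print(mode_to_rwx(0o755))  # 'rwxr-xr-x'
print(mode_to_rwx(0o644))  # 'rw-r--r--'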

Bit Manipulation Operators

These operators work directly on the binary representation of integers.

a = 0b10110100  # 180
b = 0b11110000  # 240

a & b   # AND:  10110000 = 176  (both bits must be 1)
a | b   # OR:   11110100 = 244  (either bit can be 1)
a ^ b   # XOR:  01000100 = 68   (bits differ → 1)
~a      # NOT:  result is -181 in Python (bitwise complement of 180 in two's complement)
a << 2  # LEFT SHIFT:  10110100 → 1011010000 = 720  (multiply by 4)
a >> 2  # RIGHT SHIFT: 10110100 → 00101101   = 45   (integer divide by 4)

Shifting left by n is equivalent to multiplying by 2ⁿ. Shifting right by n is integer division by 2ⁿ. These were once critical for performance; today compilers do this optimization automatically, but you'll encounter these patterns in legacy code, hardware registers, and low-level protocols. (Wikipedia's bitwise operation overview covers each operator with truth tables.)
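
The multiply/divide equivalence is easy to verify in Python:

x = 180
for n in range(4):
    assert x << n == x * 2**n   # left shift by n = multiply by 2**n
    assert x >> n == x // 2**n  # right shift by n = floor-divide by 2**n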

Bitmasks and Flags

Bitmasks let you pack multiple boolean flags into a single integer. Each bit represents one flag. Testing and setting flags is done with AND and OR.

const READ    = 0b001  // 1
const WRITE   = 0b010  // 2
const EXECUTE = 0b100  // 4

let perms = READ | WRITE  // 0b011 = 3, has both flags set

// Test if a flag is set
(perms & READ) !== 0      // → true
(perms & EXECUTE) !== 0   // → false

// Set a flag
perms |= EXECUTE           // 0b111 = 7

// Clear a flag
perms &= ~WRITE            // 0b101 = 5

// Toggle a flag
perms ^= READ              // 0b100 = 4

This pattern appears everywhere: Unix permissions, network protocol flags, DOM mouse-event button state (event.buttons is a bitmask of pressed buttons), OpenGL rendering state, and database permission systems.

Hex in Colors

CSS and design tools use hex notation for RGB colors. #RRGGBB — two hex digits (one byte) per color channel.

#FF0000  → R=255, G=0,   B=0    (red)
#00FF00  → R=0,   G=255, B=0    (green)
#1A2B3C  → R=26,  G=43,  B=60   (dark blue-gray)

The shorthand #RGB expands each digit: #F0A → #FF00AA. When you see a design with #E5E7EB, you can read it immediately: slightly below full white (0xFF = 255) in all channels, which means a light gray.

Alpha-channel hex adds a fourth byte: #RRGGBBAA. So #00000080 is black at 50% opacity (0x80 = 128, which is 128/255 ≈ 50%).
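
Parsing these is just slicing two hex digits per channel. A minimal Python sketch (no #RGB shorthand handling or validation):

def parse_hex_color(s):
    # '#RRGGBB' → (r, g, b); '#RRGGBBAA' → (r, g, b, a)
    s = s.lstrip("#")
    return tuple(int(s[i:i + 2], 16) for i in range(0, len(s), 2))

print(parse_hex_color("#1A2B3C"))   # (26, 43, 60)
print(parse_hex_color("#00000080")) # (0, 0, 0, 128)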

Reading a Hex Dump

A hex dump shows the raw bytes of a file alongside their printable ASCII representation. This is how you look at binary data without a specialized parser.

Offset  Hex bytes                                ASCII
00000000  25 50 44 46 2d 31 2e 34  0a 25 e2 e3 cf d3 0a 36  |%PDF-1.4.%.....6|

Each pair of hex digits is one byte. The ASCII column shows printable characters and dots for non-printable bytes. The first four bytes 25 50 44 46 are %PDF — the PDF magic number.

Hex dumps let you verify file signatures (magic bytes at the start of a file), inspect binary protocols, and debug serialization issues. On the command line, xxd and hexdump -C produce this output.
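
Producing this layout yourself takes only a few lines. Here's a bare-bones Python imitation of hexdump -C:

def hexdump(data, width=16):
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        hex_part = " ".join(f"{b:02x}" for b in chunk)
        # Printable ASCII (32-126) as-is, everything else as a dot.
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        print(f"{off:08x}  {hex_part:<{width * 3}} |{text}|")

hexdump(b"%PDF-1.4\n%\xe2\xe3\xcf\xd3\n6")
# 00000000  25 50 44 46 2d 31 2e 34 0a 25 e2 e3 cf d3 0a 36  |%PDF-1.4.%.....6|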

Practical Examples in Common Languages

# Python
bin(255)        # '0b11111111'
oct(255)        # '0o377'
hex(255)        # '0xff'
int('ff', 16)   # 255
int('11111111', 2)  # 255
f'{255:08b}'    # '11111111' (8-bit binary string)
f'{255:02x}'    # 'ff' (lowercase hex)

// JavaScript
(255).toString(2)   // '11111111'
(255).toString(8)   // '377'
(255).toString(16)  // 'ff'
parseInt('ff', 16)  // 255
parseInt('377', 8)  // 255

MDN's `Number.prototype.toString()` reference covers the radix argument in detail; `BigInt.prototype.toString()` accepts the same radix values.

// C
#include <stdio.h>

printf("%d %o %x\n", 255, 255, 255);  // 255 377 ff
int x = 0xFF;        // hex literal
int y = 0377;        // octal literal (leading zero)
int z = 0b11111111;  // binary literal (standardized in C23; previously a GCC extension)

The Number Base Converter handles base conversion for any value instantly — useful when you need to cross-check a conversion or work with unusual bases beyond 2, 8, 10, and 16.

For understanding how these byte values connect to text encoding — why 0x41 is 'A' and why 0xE2 0x82 0xAC is '€' — see How UTF-8 and Unicode Work. For how binary data gets encoded for transmission as text (Base64), Encoding vs Encryption vs Hashing explains where that fits in the broader picture.

The Hash Generator outputs hex digests, and the Base64 Encoder works on the same underlying bytes: once you can read hex fluently, those outputs become immediately interpretable rather than opaque character sequences.

FAQ

Why is hex base 16 specifically?

Because 16 is 2^4 — exactly four bits per hex digit, which means two hex digits map perfectly to one byte (8 bits). This makes hex the natural compact representation of byte-aligned data: a 32-bit integer is always exactly 8 hex digits, a SHA-256 hash is exactly 64 hex digits. Other bases (decimal, octal) don't align to byte boundaries, so they need awkward leading zeros and width handling.
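
A quick Python check of those digit counts:

import hashlib

print(f"{0xDEADBEEF:08x}")  # 'deadbeef' (a 32-bit value is always 8 hex digits)
print(len(hashlib.sha256(b"example").hexdigest()))  # 64 (256 bits / 4 bits per digit)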

Should I use octal for file permissions or symbolic notation?

For scripts and code, use octal — chmod 755 is unambiguous and shorter than chmod u+rwx,go+rx. For interactive use, symbolic is more readable for incremental changes (chmod +x is clearer than figuring out the octal). Both produce the same result. Modern best practice: octal for setting absolute permissions, symbolic for relative changes.

Why does JavaScript's `parseInt('08')` return 0 in older versions?

It used to interpret a leading zero as octal mode, so parseInt('08') was "parse 8 in base 8" — but 8 isn't a valid octal digit, so it returned 0. ES5 (2009) deprecated this behavior; modern parsers default to base 10 unless you specify the radix. Always pass the radix explicitly: parseInt('08', 10) returns 8. This is a long-standing source of subtle bugs.

What's the difference between bitwise AND and logical AND?

Bitwise AND (&) operates on each bit position of two integers — 0b1100 & 0b1010 = 0b1000. Logical AND (&& in C/JS, and in Python) returns the second operand if the first is truthy, else the first. They look similar but do completely different things. 5 && 3 is 3 (logical); 5 & 3 is 1 (bitwise). Beginners often confuse them in conditional expressions.
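
The same contrast in Python:

print(5 & 3)    # 1: bitwise, 0b101 & 0b011 = 0b001
print(5 and 3)  # 3: logical, 5 is truthy so the second operand is returned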

How do negative numbers work in binary?

Two's complement is the universal scheme. To represent -N in n bits: invert all bits of N (one's complement), then add 1. For 8-bit signed: 5 is 00000101, -5 is 11111011. The leading bit indicates sign (1 = negative). This makes addition/subtraction work without special-casing — adding -5 to 10 gives the right answer using normal binary addition. ~x in most languages is one's complement; -x is two's complement.
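
Python integers are arbitrary-precision, but masking with 0xFF emulates an 8-bit register, which makes the bit patterns visible:

print(f"{-5 & 0xFF:08b}")  # '11111011', the 8-bit two's-complement pattern for -5
print((~5 + 1) & 0xFF)     # 251: invert all bits, add 1, same pattern
print(0b11111011 - 256)    # -5: reinterpret the pattern as a signed 8-bit value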

Why do hex colors have 6 digits, not 8?

Six digits is the historical RGB format (2 digits each for red, green, blue, no alpha). Eight-digit hex (#RRGGBBAA) was added later for alpha channel support. CSS Color 4 supports both, with universal browser support since 2020. Four-digit shorthand (#RGBA) expands each digit to two — #F0A8 means #FF00AA88. When in doubt, use the 6-digit form for opaque colors and 8-digit only when you need alpha.

What's the largest integer JavaScript can safely represent?

Number.MAX_SAFE_INTEGER is 2^53 - 1 = 9,007,199,254,740,991. JavaScript numbers are IEEE 754 64-bit floats; integers larger than this lose precision because the mantissa is only 53 bits wide. For larger integers (database IDs, timestamps in nanoseconds, cryptographic values), use BigInt (123n) or strings. Languages like Python and Java have arbitrary-precision integers natively; JS does too, but only via BigInt.

When should I use binary literals (0b...) in code?

When the value you're constructing has meaningful bit patterns that would be obscured by hex or decimal. Examples: bitmask flags (0b001, 0b010, 0b100), Unicode code point ranges, hardware register layouts, simple state machines. For arbitrary integer values, decimal or hex is clearer. Most modern languages support binary literals natively (Python 3, ES2015 JS, C23, Rust).