Character Encoding Systems Analysis

A character encoding system assigns numerical values to a set of alphanumeric characters, punctuation marks, and special symbols. In this respect, encoding systems resemble a substitution cipher, in which elements of the original information correspond to specific code symbols. It would be incorrect, however, to equate encryption with coding. The difference is that a cipher contains a variable part called a key, while a coding system has no variable part: information items are replaced with code characters according to a fixed table, or by means of a defined function or coding algorithm.


The reason underlying the creation of coding systems is the technical mismatch between human perception and computer memory. The set of letters and numbers that a person reads naturally is unreadable for the computer: using the binary system, the pioneers of the computer age managed to express the entire English alphabet, the digits, and some additional symbols as combinations of zeros and ones (Zentgraf, 2015). Thus one of the very first encoding systems, ASCII, appeared: each character was encoded with seven bits of information, stored in a single byte. For example, the digit "1" corresponds to the ASCII code 49, or "0110001" in binary.
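The mapping described above can be illustrated with a minimal Python sketch; the digit "1" and its code 49 are the example from the text, and the 7-bit formatting is simply one way to display the bit pattern:

```python
# Sketch: the digit "1" is stored as ASCII code point 49,
# i.e. the 7-bit binary pattern 0110001.
ch = "1"
code = ord(ch)              # numeric ASCII value of the character
bits = format(code, "07b")  # the same value as a 7-bit binary string

print(code)  # 49
print(bits)  # 0110001

# Encoding the character as ASCII yields exactly that one byte:
assert ch.encode("ascii") == bytes([49])
```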

Historically, the ASCII system has undergone many modifications that changed its structure (Zentgraf, 2015). Initially, the first seven bits defined the character, while the eighth bit, the parity bit, served a control function. Over time, the parity bit came to be used for encoding additional characters, resulting in the ISO-8859-1 system (Zentgraf, 2015). Moreover, the evolution of encoding systems has produced a number of new systems, each designed for specific tasks. One of the most important is the Unicode standard, introduced in 1991 (Zentgraf, 2015). Not a particular encoding itself, but rather a large table of code points, Unicode spawned a host of new subsystems.
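The reuse of the eighth bit can be shown in a short sketch; the character "é" is an illustrative choice, not taken from the source:

```python
# Sketch: ISO-8859-1 reuses the former parity bit (the eighth bit)
# to encode 128 extra characters beyond 7-bit ASCII.
text = "é"                       # a character outside the 7-bit ASCII range
raw = text.encode("iso-8859-1")  # still fits in a single byte
print(raw)                       # b'\xe9'
print(format(raw[0], "08b"))     # 11101001 -- the high (eighth) bit is set

# The same character cannot be represented in pure 7-bit ASCII:
try:
    text.encode("ascii")
except UnicodeEncodeError:
    print("not representable in ASCII")
```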

Because Unicode incorporates ASCII as its first 128 code points, it is often referred to as extended ASCII. Unlike its ancestor, however, Unicode is a table of 1,114,112 positions, most of which are still free (Zentgraf, 2015). Unicode encodings, including UTF-8 and UTF-16, solve the problem of mapping a given code point to a concrete sequence of bytes. The most convenient way to compare the different encoding systems is the table below.

| Criterion for comparison | ASCII | ISO-8859-1 | Unicode |
|---|---|---|---|
| Mechanism | 1 character = 7 bits | 1 character = 8 bits | Can be encoded with different numbers of bits |
| Number of symbols | 128 | 256 | 1,114,112 |
| Destination | For English only | Some Western European languages | Suitable for all national languages |
| General features | The first 128 characters are encoded in an identical way in all three systems | | |
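The distinction drawn in the table, between a single Unicode code point and its varying byte lengths under different encodings, can be sketched as follows; the euro sign is an illustrative example not taken from the source:

```python
# Sketch: one Unicode code point, serialized by different Unicode encodings.
ch = "€"                             # U+20AC, a single code point
print(ord(ch))                       # 8364 -- its position in the Unicode table
print(list(ch.encode("utf-8")))      # [226, 130, 172] -- three bytes in UTF-8
print(list(ch.encode("utf-16-be")))  # [32, 172] -- two bytes in UTF-16 (big-endian)

# ASCII characters remain a single byte in UTF-8, which keeps UTF-8
# backward compatible with ASCII:
assert "A".encode("utf-8") == "A".encode("ascii")
```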


Zentgraf, D. C. (2015). What every programmer absolutely, positively needs to know about encodings and character sets to work with text. Web.
