Unicode stored in C char



I'm learning the C language on Linux now, and I've come across a slightly weird situation.

As far as I know, the standard C char data type is ASCII, 1 byte (8 bits). That should mean it can hold only ASCII characters.

In my program I use char input[], which is filled by the getchar function like this pseudocode:

char input[20];
int z, i;
for(i = 0; i < 20; i++) {
   z = getchar();
   input[i] = z;
}

The weird thing is that it works not only for ASCII characters, but for any character I imagine, such as @&@{čřžŧ¶'`[łĐŧđж←^€~[←^ø{&}čž on the input.

My question is - how is it possible? It seems to be one of many beautiful exceptions in C, but I would really appreciate explanation. Is it a matter of OS, compiler, hidden language's additional super-feature?


2012-04-04 18:41
by Miroslav Mares
It's not really characters, it's bytes that are gotten with getchar(). Every character is encoded as a byte sequence - Daniel Fischer 2012-04-04 18:45
These are relatively normal characters. Try widening your imagination to include, say, some Chinese or Japanese letters. Or try Cyrillic for a change :) Here's "Hello" in Russian for you: "Привет" - dasblinkenlight 2012-04-04 18:45
@DanielFischer I understand that getchar() decodes it into byte(s). But I still don't understand how those bytes can be held in the char data type, which should be one byte - Miroslav Mares 2012-04-04 18:48
No, getchar() doesn't decode it into bytes. The input buffer from which getchar() reads already contains the possibly several bytes making up the character you typed. Each getchar() gets you one of the bytes, so for UTF-8 encoded input, a character can take up to four getchar(). When you print it out, the byte sequence is sent to the terminal and that translates it into glyphs - Daniel Fischer 2012-04-04 18:53
Great, thanks, I completely understand now - Miroslav Mares 2012-04-04 19:03


There is no magic here - the C language gives you access to the raw bytes, as they are stored in the computer's memory. If your terminal is using UTF-8 (which is likely), non-ASCII characters take more than one byte in memory. When you display them again, it is your terminal code which converts these sequences back into a single displayed character.

Just change your code to print the strlen of the strings, and you will see what I mean.

To properly handle UTF-8 non-ASCII characters in C you have to use some library to handle them for you, like glib, Qt, or many others.

2012-04-04 18:46
by jsbueno
or try to print just input[ 0 ] to see that it won't print the first character, but only the first byte which most probably will be an unprintable character, and then try printing input[ 0 ] and input[ 1 ] together to see the multibyte character - abresas 2012-04-04 18:48
Ok, I have just tried some code modifications and it works exactly as described. Thank you.

Only a note about wide characters - <wchar.h> isn't enough for proper handling of wide characters - Miroslav Mares 2012-04-04 19:07


ASCII is a 7-bit character set, in C normally represented by an 8-bit char. If the highest bit in an 8-bit byte is set, it is not an ASCII character.

Also notice that you are not guaranteed ASCII as the base character set, though many ignore other scenarios. If you want to check whether a "primitive" byte is an alphabetic character, in other words, you cannot, taking heed of all systems, say:

is_alpha = (c > 0x40 && c < 0x5b) || (c > 0x60 && c < 0x7b);

Instead you'll have to use ctype.h and say:

is_alpha = isalpha((unsigned char)c);
The only exception, AFAIK, is for the digits: on most tables at least they have contiguous values (and the C standard does guarantee that '0' through '9' are contiguous).

Thus this works:

char ninec  = '9';
char eightc = '8';

int nine  = ninec  - '0';
int eight = eightc - '0';

printf("%d\n", nine);
printf("%d\n", eight);

But this is not guaranteed to be 'a':

alpha_a = 0x61;

On systems not based on ASCII, e.g. those using EBCDIC, C still runs fine, but there they (mostly) use 8 bits instead of 7, and e.g. 'A' can be coded as decimal 193 instead of 65 as in ASCII.

For ASCII, however, bytes with decimal values 128 - 255 (8 bits in use) are extended and not part of the ASCII set. E.g. the ISO-8859 family uses this range.

What is often done is also to combine two or more bytes into one character. So if you print two bytes after each other that are defined as, say, UTF-8 0xC3 0x98 == Ø, then you'll get that character.

This again depends on which environment you are in. On many systems/environments, printing ASCII values gives the same result across character sets, systems etc., but printing bytes > 127 or double-byte characters gives a different result depending on the local configuration.


Mr. A running the program gets


While Mr. B gets


This is perhaps especially relevant to the ISO-8859 series and Windows-1252 of single byte representation of extended characters, etc.

  • UTF-8#Codepage_layout: in UTF-8 you have ASCII, and then you have special sequences of bytes.
    • Each sequence starts with a byte > 127 (beyond the last ASCII value),
    • followed by a given number of bytes which all start with the bits 10.
    • In other words, you will never find an ASCII byte in a multi-byte UTF-8 representation.

That is: the first byte in UTF-8, if not ASCII, tells how many bytes this character has. You could also say ASCII characters say no more bytes follow - because the highest bit is 0.

I.e. if the file is interpreted as UTF-8:

if c  < 128, 0x80, then ASCII
if c == 194, 0xC2, then one more byte follows; interpret to symbol
if c == 226, 0xE2, then two more bytes follow; interpret to symbol

As an example. If we look at one of the characters you mention. If in an UTF-8 terminal:

$ echo -n "č" | xxd

Should yield:

0000000: c48d ..

In other words "č" is represented by the two bytes 0xc4 and 0x8d. Add -b to the xxd command and we get the binary representation of the bytes. We dissect them as follows:

 ___  byte 1 ___     ___ byte 2 ___                       
|               |   |              |
0xc4 : 1100 0100    0x8d : 1000 1101
       |                    |
       |                    +-- all "follow" bytes starts with 10, rest: 00 1101
       + 11 -> 2 bits set = two byte symbol, the "bits set" sequence
               end with 0. (here 3 bits are used 110) : rest 0 0100

Rest bits combined: xxx0 0100 xx00 1101 => 00100001101
                       \____/   \_____/
                         |        |
                         |        +--- From last byte
                         +------------ From first byte

This gives us: 00100001101 (binary) = 269 (decimal) = 0x10D => Unicode code point U+010D == "č".

This number can also be used in HTML as &#269; == č

Common to this and lots of other encoding systems is that an 8-bit byte is the base.

Often it is also a question of context. As an example, take GSM SMS, with ETSI GSM 03.38/03.40 (3GPP TS 23.038, 3GPP 23038). There we also find a 7-bit character table, the 7-bit GSM default alphabet, but instead of storing the characters as 8 bits they are stored as 7 bits1. This way you can pack more characters into a given number of bytes: a standard 160-character SMS becomes 1280 bits, or 160 bytes, as ASCII, but only 1120 bits, or 140 bytes, as SMS.

1 Not without exception; there is more to the story.

I.e. a simple example of bytes saved as septets (7-bit) in the SMS UD (user data) format, unpacked to ASCII. Below, C8329BFDBEBEE56C32 is dissected:

7 bit UDP represented          |         +--- Alphas has same bits as ASCII
as 8 bit hex                   '0.......'
C8329BFDBEBEE56C32               1100100 d * Prev last 6 bits + pp 1
 | | | | | | | | +- 00 110010 -> 1101100 l * Prev last 7 bits 
 | | | | | | | +--- 0 1101100 -> 1110010 r * Prev 7 + 0 bits
 | | | | | | +----- 1110010 1 -> 1101111 o * Last 1 + prev 6
 | | | | | +------- 101111 10 -> 1010111 W * Last 2 + prev 5
 | | | | +--------- 10111 110 -> 1101111 o * Last 3 + prev 4
 | | | +----------- 1111 1101 -> 1101100 l * Last 4 + prev 3
 | | +------------- 100 11011 -> 1101100 l * Last 5 + prev 2
 | +--------------- 00 110010 -> 1100101 e * Last 6 + prev 1
 +----------------- 1 1001000 -> 1001000 H * Last 7 bits
                                    +----- GSM Table as binary

And 9 bytes "unpacked" becomes 10 characters.

2012-04-04 18:58
by Morpfh
This article is simply great! Thank you for the summary and overview - Miroslav Mares 2012-04-05 18:07
@Mimars; Became a bit long, but :). It is an interesting topic, and I find it fun to see how things have been solved. I also think it is educational, in that one can use similar logic when coding - also for completely different things. There are also quite a few beauties in ASCII and how everything is arranged and sorted - e.g. pp3 here: http://faculty.kfupm.edu.sa/ics/said/ics232Lectures/L11_LogicInstructions.doc. It is also educational to look at e.g. /usr/include/ctype.h etc. - Morpfh 2012-04-05 19:08


ASCII is 7 bits, not 8 bits. A char[] holds bytes, which can be in any encoding - ISO-8859-1, UTF-8, whatever you want. C doesn't care.

2012-04-04 18:45
by evil otto


There is a data type wint_t (#include <wchar.h>) for non-ASCII characters. You can use the function getwchar() to read them.

2012-04-04 18:48
by greg


This is the magic of UTF-8: you don't even have to worry about how it works. The only problem is that the C data type is named char (for character), while what it actually means is byte. There is no 1:1 correspondence between characters and the bytes that encode them.

What happens in your code is that, from the program's point of view, you input a sequence of bytes, it stores the bytes in memory and if you print the text it prints bytes. This code doesn't care how these bytes encode the characters, it's only the terminal which needs to worry about encoding them on input and correctly interpreting them on output.

2012-05-02 07:56
by ybungalobill


There are of course many libraries that do the job, but to quickly decode any UTF-8 sequence, this little function is handy:

typedef unsigned char utf8_t;

#define isunicode(c) (((c)&0xc0)==0xc0)

int utf8_decode(const char *str,int *i) {
    const utf8_t *s = (const utf8_t *)str; // Use unsigned chars
    int u = *s,l = 1;
    if(isunicode(u)) {
        // Count the leading 1-bits to find the sequence length (2..6)
        int a = (u&0x20)? ((u&0x10)? ((u&0x08)? ((u&0x04)? 6 : 5) : 4) : 3) : 2;
        if(a<6 || !(u&0x02)) {
            int b;
            // Strip the length bits from the lead byte, then append
            // the low 6 bits of each continuation byte.
            u = ((u<<(a+1))&0xff)>>(a+1);
            for(b=1; b<a; ++b)
                u = (u<<6)|(s[l++]&0x3f);
        }
    }
    if(i) *i += l;
    return u;
}

Considering your code, you can iterate the string and read the Unicode values:

int l;
for(i=0; i<20 && input[i]!='\0'; ) {
   if(!isunicode(input[i])) i++;
   else {
      l = 0;
      z = utf8_decode(&input[i],&l);
      printf("Unicode value at %d is U+%04X and it's %d bytes.\n",i,z,l);
      i += l;
   }
}

2016-02-11 06:12
by Per Löwgren