What is a Microcontroller and How Does it Work

A microcontroller is an entire computer on a chip: a processor, memory, I/O peripherals, and more, all on a single silicon die. Microcontrollers are used to control everything from consumer products to industrial systems.

Microcontrollers were originally created in the 1970s, and early versions were used to build the first personal computers. Most lack an operating system; they simply have a program that starts automatically whenever power is applied.
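
As a rough sketch, such a bare-metal program in C is just a routine that sets up the hardware and then loops forever. The structure below is generic; the init_hardware function is only a placeholder for device-specific setup, not part of any real chip's API.

    /* Bare-metal microcontroller program: no operating system, just a
     * main() that the startup code jumps to after power-on reset. */

    static void init_hardware(void)
    {
        /* Configure clocks, pins, and peripherals here (device-specific). */
    }

    int main(void)
    {
        init_hardware();

        /* The "super loop": the program never returns; it repeats its
         * control task forever until power is removed. */
        for (;;) {
            /* read inputs, compute, update outputs */
        }
    }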

The earliest versions had their programs permanently written into memory; once programmed, such devices could never be changed. Modern versions use flash memory, which allows their programs to be rewritten. The newest and most powerful versions even run operating systems, enabling them to handle highly complex tasks.

Microcontroller programs are written much like other programs, with assembly language and C being the most popular languages. Microcontrollers interact with the world through a variety of peripherals: simple switches, analog-to-digital converters, digital-to-analog converters, pulse-width modulators, communication channels, and so on.
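
In C, these peripherals are usually reached through memory-mapped registers. The sketch below illustrates the idea with a switch driving an LED; the register addresses and bit positions are invented for illustration, since the real ones come from the particular chip's datasheet.

    #include <stdint.h>

    /* Hypothetical memory-mapped GPIO registers -- addresses and bit
     * assignments are made up; real values come from the datasheet. */
    #define GPIO_IN    (*(volatile uint32_t *)0x40000000u)  /* input register  */
    #define GPIO_OUT   (*(volatile uint32_t *)0x40000004u)  /* output register */
    #define SWITCH_BIT (1u << 0)   /* pin wired to a push-button switch */
    #define LED_BIT    (1u << 1)   /* pin wired to an LED */

    int main(void)
    {
        for (;;) {
            if (GPIO_IN & SWITCH_BIT)
                GPIO_OUT |= LED_BIT;    /* switch pressed: LED on  */
            else
                GPIO_OUT &= ~LED_BIT;   /* switch released: LED off */
        }
    }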

To facilitate programming, software emulators have been developed. With these, a microcontroller's program can be written, tested, and debugged before ever being loaded onto the device. Software emulators have their limitations, however, and in-circuit emulators were developed to overcome them. These connect the microcontroller to a computer via a cable, letting the computer monitor the program's execution and make changes as needed. Microcontrollers are currently a booming technology, and their future seems wide open.