Programming, Processing, and Computing

We often use terms like multiprogramming, multitasking, multithreading, and multiprocessing, but frequently without consistent definitions. This post shares my definitions of and perspectives on these terms.

Definitions

  • The term software may refer to a program or process.
  • A program is the static image of code at rest in storage. It can be a text-based script or a binary executable, and it is the target of static software testing.
  • A script is typically executed by an interpreter.
  • An executable contains the object code that a compiler (and linker) produces from the source code.
  • A process is a program loaded into memory and in execution. It is the target of dynamic software testing.
  • A task is a unit of work scheduled by the operating system.
  • A thread is the smallest unit of execution scheduled onto a processor. A single-threaded process contains one thread, commonly known as the main (or, in GUI applications, the UI) thread; a multi-threaded process comprises two or more threads (see the sketch after this list).
  • Scheduling is the arrangement of tasks for execution on the processor.
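A minimal Python sketch can make these definitions concrete (the file name is illustrative): the text saved on disk is the program, each run of it is a process with its own ID, and the code executes inside that process's main thread.

```python
# program_vs_process.py -- illustrative name; the file on disk is the *program*.
import os
import threading

# Each run of this program is a separate *process* with its own process ID.
print("process ID of this running instance:", os.getpid())

# The statements execute on the process's single *thread* (the main thread).
print("executing thread:", threading.current_thread().name)
```

Running the script twice at the same time produces two processes, with different IDs, created from the same program.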

The Age of Punch Cards

In the good old days, a program was stored as a deck of punch cards, loaded into the computer by operators, and executed one at a time. The following video demonstrates how to load a program as a pile of punch cards, execute it, and print the output.

However, loading and executing programs this way is inefficient: a lot of processor time is wasted on setup and loading overhead. Moreover, most programs waste even more processor time because they do nothing while waiting for I/O devices to finish their work. This is where multiprogramming comes in.

Multiprogramming

In multiprogramming, programs are loaded into the computer in batches to improve setup and loading efficiency. Those loaded programs become tasks managed by the operating system (OS). One of the OS's most crucial roles is to schedule them for processing. Simple OSs schedule tasks (or jobs) serially, executing them one by one; smarter OSs monitor the tasks waiting for I/O, block them, and dispatch others for execution, which leads to multitasking.
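The dispatch idea can be sketched as a toy scheduler in Python; the job names and step lists below are invented for illustration and are not any real OS interface. Each job reports whether it wants the CPU or is waiting on I/O, and the loop parks blocked jobs and dispatches a ready one instead.

```python
from collections import deque

def job(name, steps):
    # Each yield hands control back to the "OS" and says why: "cpu" or "io".
    for kind in steps:
        yield kind

ready = deque([("job-1", job("job-1", ["cpu", "io", "cpu"])),
               ("job-2", job("job-2", ["cpu", "cpu", "io"]))])
waiting = []                              # jobs blocked on (simulated) I/O

while ready or waiting:
    # Pretend every pending I/O completes before the next dispatch round.
    ready.extend(waiting)
    waiting.clear()
    name, g = ready.popleft()
    try:
        kind = next(g)
    except StopIteration:
        print(name, "finished")
        continue
    if kind == "io":
        print(name, "blocked on I/O")     # park it and dispatch someone else
        waiting.append((name, g))
    else:
        print(name, "used the CPU")
        ready.append((name, g))
```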

Multitasking

If an operating system can load multiple programs and schedule the resulting tasks so that they are processed simultaneously, it is a multitasking system.

  • Microsoft DOS is NOT a multitasking OS because users can execute only one program at a time.
  • Microsoft Windows 3.1 is a cooperative multitasking OS that loads and executes many Windows applications simultaneously. However, it relies on the running application voluntarily releasing control of the processor.
  • Microsoft Windows 95 is a pre-emptive multitasking OS: it slices processor time into small time slots and allocates them to tasks, with the OS constantly switching tasks in and out of execution, aka context switching. On a single-processor system, tasks are not actually performed at the same time; because the processor is fast and the switching is frequent, users perceive them as running simultaneously. (Both styles are contrasted in the sketch below.)

To sum up, multitasking is the capability of a computer to perform multiple tasks simultaneously, regardless of whether it has a single processor or multiple processors.
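Assuming a standard CPython install, the two styles can be sketched side by side: with asyncio, a task keeps control until it voluntarily yields via await (the cooperative, Windows 3.1-style model), while OS threads are switched by the scheduler on a timer whether they yield or not (the pre-emptive model). The task names are illustrative.

```python
import asyncio
import threading
import time

# Cooperative: each task must voluntarily give up control with `await`;
# a task that never awaits would starve the others.
async def polite(name):
    for i in range(3):
        print(name, "step", i)
        await asyncio.sleep(0)        # voluntary release of control

async def cooperative_demo():
    await asyncio.gather(polite("A"), polite("B"))

asyncio.run(cooperative_demo())

# Pre-emptive: the OS slices time and switches threads on its own;
# neither function has to yield explicitly.
def busy(name):
    for i in range(3):
        print(name, "step", i)
        time.sleep(0.01)              # stand-in for a slice of real work

threads = [threading.Thread(target=busy, args=(n,)) for n in ("C", "D")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```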

Multithreading

Multithreading is a programming technique for achieving multitasking, typically on a single-processor system. A thread can be a task; however, the term task emphasizes portability, i.e., the ability to be assigned to different processors on a multi-processor system, whereas a thread suggests that it is dedicated to a specific processor or executed on a single-processor system. For example, Microsoft's Task Parallel Library (TPL) focuses on tasks instead of threads so that the library can assign tasks to appropriate processors and manage them more efficiently.
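As a small illustration (the worker names and shared counter are made up), the sketch below runs one process with two additional threads that share the same memory, which is exactly why the update needs a lock.

```python
import threading

counter = {"value": 0}
lock = threading.Lock()

def work(name, times):
    for _ in range(times):
        with lock:                    # threads share the process's memory,
            counter["value"] += 1     # so concurrent updates need a lock
    print(name, "finished")

threads = [threading.Thread(target=work, args=(f"worker-{i}", 1000))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("total:", counter["value"])     # 2000 -- both threads saw the same dict
```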

Multiprocessing

Multiprocessing can be treated as the multi-processor version of multitasking; a small sketch follows the list below. There are two types of multiprocessing: Asymmetric Multiprocessing (ASMP) and Symmetric Multiprocessing (SMP). ASMP was the only way to handle multiple CPUs before SMP became available.

  • Processors on an ASMP system are not treated equally; e.g., one processor may be reserved for the OS while another runs applications.
  • The SMP system uses identical processors that are “connected to a single, shared main memory, have full access to all input and output devices, and are controlled by a single operating system instance that treats all processors equally, reserving none for special purposes.”
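On an SMP machine, a program can put those equal processors to work. Below is a minimal sketch using Python's standard multiprocessing module; the square() function and the work items are illustrative.

```python
from multiprocessing import Pool, cpu_count

def square(n):
    return n * n

if __name__ == "__main__":            # required where the "spawn" start method is used
    print("processors visible to the OS:", cpu_count())
    with Pool(processes=cpu_count()) as pool:
        # Work items are farmed out to worker processes, which the OS
        # is free to place on any of the (equal) processors.
        print(pool.map(square, range(8)))
```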

Computing using Multiple Computers

So far, I have covered computing on a single computer system. No matter how many processors the system has, it can provide multiprogramming to load multiple programs, multitasking to perform tasks simultaneously, and multiprocessing to utilize the power of multiple processors. However, some technologies realize the idea of a "computer" made up of several computers: clustering, grid computing, and massively parallel processing (MPP), to name a few.

Parallel Computing

Parallel computing is an umbrella term for a variety of architectures that solve a problem with multiple computers (e.g., clustering) or computers made up of multiple processors (e.g., SMP).
