Modern applications and games present bigger and bigger challenges for hardware. Aside from sophisticated graphics and enormous demands on memory, one of the most important characteristics of a well-running computer is the speed of the processor. To keep up with these challenges, hardware manufacturers regularly develop new technologies and architectures for CPUs. One such technology is multithreading, in which the processor works on multiple threads (that is, independent units of execution) more or less simultaneously. How can that work? Keep reading to find out.


What is multithreading?

In order to increase the speed of the processor core without changing the frequency, you can use multithreading to have the CPU process several tasks at once. Or, to be precise, you can have it process several threads at once. A thread is a sequence of programmed instructions that's part of a larger process. Programs can be broken down into processes, and these processes can be broken down into individual threads. Every process consists of at least one thread.
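To make the terms concrete, here is a minimal Python sketch (the worker function and its word-counting job are illustrative assumptions, not part of the article): one process - the running script - starts two independent threads, each executing its own sequence of instructions.

```python
import threading

# Illustrative example: the word-counting job is an assumption chosen
# only to give each thread some independent work to do.

def count_words(text):
    """One unit of work that could run as its own thread."""
    return len(text.split())

results = {}

def worker(name, text):
    # Each thread stores its result under its own key.
    results[name] = count_words(text)

# The main program (one process) is itself a thread; here it starts two more.
threads = [
    threading.Thread(target=worker, args=("a", "one two three")),
    threading.Thread(target=worker, args=("b", "four five")),
]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait until both threads have finished

print(results)  # e.g. {'a': 3, 'b': 2}
```

Even this tiny program matches the definitions above: one process, three threads (the main thread plus the two workers), each a separate sequence of instructions.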

Processes are usually executed sequentially - one process after the other. However, this can lead to lengthy tasks blocking the hardware, which is less than optimal: if another process needs to be executed, it has to wait its turn. With multithreading, multiple threads are processed more or less simultaneously. Truly simultaneous processing is rare, although modern hardware does make it possible.

But even this so-called pseudo-simultaneity provides a boost in performance. The system organises and computes threads so intelligently that the user perceives them as being processed simultaneously. This form of simultaneity is not to be confused with what a multicore processor can do: if the system has multiple processor cores, multiple processes really are processed simultaneously.

In order to implement multithreading effectively, you'll need properly prepared software. If developers don't (or can't) separate their programs into multiple threads, the method is useless. For example, some gamers have noticed that performance actually suffers when multithreading is turned on. That's because the computer games in question weren't designed with multithreading in mind, so the system's attempt to process multiple threads at once has a detrimental effect.

Goals of multithreading

The ultimate goal of multithreading is to increase the computing speed of a computer and thus also its performance. To this end, CPU usage is optimised: rather than sticking with one process for a long time - even while it's waiting on data, for example - the system quickly switches to the next task. This means that there is hardly any idle time.
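The effect of filling in waiting time can be seen even in a small Python sketch (the 0.2-second sleep is an illustrative stand-in for waiting on data): two waits overlap, so the total wall-clock time stays close to 0.2 s rather than the 0.4 s that sequential execution would need.

```python
import threading
import time

# Illustrative sketch: each task "waits on data" for 0.2 s.
# Run one after the other they would need about 0.4 s in total;
# as threads, the waits overlap and the CPU is free in the meantime.

def fetch():
    time.sleep(0.2)  # stands in for waiting on I/O or memory

start = time.perf_counter()
threads = [threading.Thread(target=fetch) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
# elapsed is close to 0.2 s, not 0.4 s
```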

At the same time, the system reacts more quickly to changes in priorities. If the user or an application suddenly and unexpectedly needs another task, the processor can quickly dedicate itself to that task, thanks to priority rankings and short threads.

The technology is primarily designed to speed up individual applications that consist of several processes and threads. Multiple tasks from the same software can be processed more or less in parallel. This is useful, for example, in video editing, where one scene can be rendered in the background while the user edits the next scene.

With multithreading, chip manufacturers can speed up their CPUs without generating significantly higher energy use. Increasing the clock frequency, by contrast, produces more heat, which then has to be dissipated at considerable cost.

How does multithreading work?

Multithreading is the result of interactions between hardware and software. Programs and processes are broken down into individual threads, which are then processed in order to execute the program. We make the distinction between hardware multithreading and software multithreading.

Hardware

For hardware multithreading, the individual programs provide their processes as separate threads. The operating system takes over the management of these threads and decides when each one should be sent to the CPU. The processor then tackles the threads either simultaneously or pseudo-simultaneously.

In practice, there are various implementations of hardware multithreading.

Switch on Event Multithreading (SoEMT)

Switch on Event Multithreading works with two threads, one in the foreground and one in the background. The change between the two (known as a context switch) is triggered by events such as input from the user or a signal that a thread is waiting on data and thus can't be processed further. The system then quickly switches to the second thread and pushes the first one into the background, where it isn't processed again until the necessary information has arrived. This is how the system reacts quickly and creates pseudo-simultaneity between the foreground and background threads.
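The switching rule can be sketched as a toy simulation in Python (the generators, the "WAIT" token and the scheduler function are illustrative assumptions, not how real hardware is programmed): the current thread runs until it reports that it is waiting on data, at which point the scheduler swaps it with the other thread.

```python
# Toy simulation of switch-on-event scheduling. Each "thread" is a
# generator that yields either a unit of work or the token "WAIT",
# meaning it is blocked on data - the event that triggers the switch.

def run_switch_on_event(fg, bg):
    trace = []
    current, other = fg, bg
    finished = set()
    while len(finished) < 2:
        if id(current) in finished:
            current, other = other, current
            continue
        try:
            step = next(current)
        except StopIteration:
            finished.add(id(current))
            current, other = other, current
            continue
        if step == "WAIT":
            # context switch: push the waiting thread into the background
            current, other = other, current
        else:
            trace.append(step)
    return trace

def foreground():
    yield "A0"
    yield "WAIT"  # e.g. waiting for data from memory
    yield "A1"

def background():
    yield "B0"
    yield "B1"

trace = run_switch_on_event(foreground(), background())
# trace == ["A0", "B0", "B1", "A1"]: work on A resumes once its wait is over
```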

“Switch on Event Multithreading” is also known as “coarse-grained multithreading”. The word “coarse” is used because the technology is mostly suitable for long waiting times. While other technologies react even faster, SoEMT works best with larger thread blocks.

Time-slice multithreading

While with SoEMT the switch between threads is set off by an event, with time-slicing the switch takes place at set time intervals. Even if a thread hasn't been completed, the processor starts computing another thread and only switches back at the next interval. Any progress in processing the thread is saved in RAM.

The challenge here is choosing the best interval length. If the timespan is too short, barely any progress can be made on the process. If it's too long, you lose pseudo-simultaneity, and the user notices that the processes are taking place one after another rather than at the same time.
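A round-robin time slice can also be sketched as a toy Python simulation (the generator "threads" and a quantum counted in steps are illustrative assumptions - a real CPU slices time, not iteration steps): each thread runs for a fixed quantum, then goes to the back of the queue, and its progress is preserved in the generator object much as a real thread's state is saved in RAM.

```python
from collections import deque

# Toy round-robin sketch: each "thread" is a generator and the quantum
# is how many steps it may run before the scheduler switches to the next.

def time_slice(threads, quantum):
    ready = deque(threads)
    trace = []
    while ready:
        thread = ready.popleft()
        for _ in range(quantum):
            try:
                trace.append(next(thread))
            except StopIteration:
                break  # thread finished; don't requeue it
        else:
            ready.append(thread)  # slice used up; progress is kept
    return trace

def steps(name, n):
    for i in range(n):
        yield f"{name}:{i}"

trace = time_slice([steps("T1", 3), steps("T2", 2)], quantum=2)
# trace == ["T1:0", "T1:1", "T2:0", "T2:1", "T1:2"]
```

With a quantum of 2, the longer thread T1 is interrupted mid-way, T2 gets its turn, and T1's remaining step is finished afterwards.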

Simultaneous multithreading (SMT)

Simultaneous multithreading (SMT) involves true simultaneity. Threads wait to be computed in so-called pipelines, and the processor works on multiple pipelines in parallel. So rather than constantly switching between two threads, the parts of the process are actually handled simultaneously. A single processor core thus acts like multiple (logical) processors. In practice, SMT is combined with multicore technology, so that, for example, a system with two physical cores and two-way SMT gives the impression of having four cores.
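You can observe this from Python: `os.cpu_count()` reports the logical processors the operating system sees, which on a two-way SMT machine is typically twice the number of physical cores (the exact ratio depends on the hardware).

```python
import os

# The operating system counts logical processors, not physical cores.
# On a machine with two-way SMT (e.g. Intel Hyper-Threading), this is
# typically twice the number of physical cores.
logical = os.cpu_count()
print(f"Logical processors visible to the OS: {logical}")
```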

Fact

The CPU manufacturer Intel has been very successful with its so-called Hyper-Threading Technology (HTT). Its competitor AMD also manufactures comparable technology. Both are implementations of SMT.


Software

With software multithreading, the application alone is responsible for breaking down processes into threads. Only the individual threads are delivered to the operating system and processor; the hardware is thus not aware of the connections between threads and handles each thread individually. The system sets a priority level for each thread, and threads with a higher priority are processed sooner. With this method, new processes that need to be finished quickly can be squeezed in, while for longer-running processes only one thread at a time is completed and the others are placed further back in the queue.
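The priority idea can be sketched with Python's `heapq` module (the tasks and priority numbers are illustrative assumptions): each unit of work is tagged with a priority level, and an urgent task submitted later is still squeezed in ahead of longer-running work.

```python
import heapq

# Toy sketch of priority-based scheduling: lower numbers run first.
# The sequence counter keeps submission order stable for equal priorities.
tasks = []

def submit(priority, seq, description):
    heapq.heappush(tasks, (priority, seq, description))

submit(2, 0, "render background scene")  # long-running, low priority
submit(0, 1, "respond to user input")    # urgent, squeezed in first
submit(1, 2, "save project file")

order = [heapq.heappop(tasks)[2] for _ in range(len(tasks))]
# order == ["respond to user input", "save project file",
#           "render background scene"]
```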

Fact

Software multithreading is mostly useful for systems with single-core processors. Since modern computers now come with at least a dual-core CPU, this form of multithreading has lost relevance.

Multithreading vs. multitasking

Multithreading and multitasking might look quite similar at first glance, but the two technologies are based on different ideas. Multitasking simply means that multiple programs are running at the same time. The CPU switches between individual tasks, but the applications aren't computed simultaneously - neither truly nor pseudo-simultaneously. The operating system usually takes control of organising the various tasks and assigns pending processes to the CPU. It appears to the user as if several programs were being processed at once, but in reality the system is switching back and forth between them.

Tip

If you open the Task Manager in Windows, you can see which processes the system is running side by side. The Activity Monitor does the same on a Mac.

In the case of multithreading, a higher degree of simultaneity is aimed for. The technology primarily aims to speed up individual programs. While multitasking enables different programs to run side-by-side, multithreading involves multiple threads from the same program. The simultaneous processing of threads ensures that the software runs faster.

Summary

Multithreading is a smart, cost-saving method for increasing processor performance. However, it only works if the software is set up for it. If you want to increase your computer's performance without implementing multithreading, you also have a number of options. If you overclock the CPU, be sure to pay attention to CPU temperature - otherwise you could bring the whole system down.
