In this article we will explain what processor threads are (also called hardware threads or, in programming, subprocesses), so that we can identify the fundamental differences between threads and processor cores.
There is still a lot of confusion about this topic, even among fairly advanced users. That is why we have set out to clarify these terms as far as possible.
For a normal user buying a processor, the concept of processing threads is not essential: the rule that more is better than less almost always holds. Where we do need to understand threads well is in program development.
Depending on how an application is programmed and compiled, it may run in a more optimized way on processors with more threads than cores. This is where we will focus our explanation.
What are the cores of a processor?
Let's start by explaining what the cores of a processor are, so that we have that background and don't confuse the two concepts.
We know that the processor is in charge of executing the instructions of the programs loaded into our computer's RAM.
Practically every instruction needed for typical PC tasks (browsing the web, writing, viewing photos, and so on) passes through it.
Physically, a processor is an integrated circuit made up of millions of transistors forming logic gates, which either let bits of data pass (in the form of electrical signals) or block them. Nothing more.
This small chip houses several modules that we call cores, in addition to other elements that do not concern us here.
The processors of a few years ago had only one of these cores and could process roughly one instruction per clock cycle. Clock speed is measured in hertz (megahertz or, nowadays, gigahertz): the higher the clock speed, the more instructions executed per second.
Now we have not just one core but several. Each core is, in effect, a subprocessor: each one can execute its own instruction, so a multi-core CPU can execute several instructions in each clock cycle.
With a 4-core processor you can execute 4 instructions simultaneously instead of just one, so for well-parallelized workloads performance can improve by up to four times. With 6 cores, 6 instructions at once. This is why today's processors are far more powerful than the old single-core ones.
And remember: these cores are physically present in the processor; they are not something virtual or created by code.
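As a quick check of this distinction, note that the operating system reports *logical* processors, which already include the threads we discuss below, not just physical cores. A minimal sketch using only Python's standard library:

```python
import os

# os.cpu_count() reports *logical* processors: on a CPU with
# simultaneous multithreading (for example 6 cores / 12 threads)
# it returns 12, not 6. Counting physical cores needs a
# third-party tool such as psutil.
logical = os.cpu_count()
print(f"Logical processors visible to the OS: {logical}")
```

The exact number printed depends, of course, on the machine the sketch runs on.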
What are threads?
Threads (also called hardware threads or, in some operating systems, subprocesses) are not a physical part of the processor, at least not in the sense of additional cores.
We can define a processing thread as the flow of control within a program: a means of managing the tasks of a processor and its different cores more efficiently.
Thanks to threads, the tasks or processes of a program can be divided into smaller pieces, which become the minimum units the system allocates processor time to. This optimizes the waiting time of each instruction in the processing queue. These pieces are the threads.
In other words, each processing thread contains a piece of the task to be performed, which is simpler to handle than feeding the entire task to a physical core at once.
In this way the CPU can process several tasks concurrently; in fact, it can handle as many tasks at once as it has threads, and normally there are one or two threads per core.
In a processor with, for example, 6 cores and 12 threads, processes can be divided into 12 different tasks instead of only 6.
This way of working lets the system manage its resources more equitably and efficiently. You know the old saying: divide and conquer.
These processors are called multi-threaded. For now, what we must keep clear is that a processor with 12 threads does not have 12 cores: cores are physical, threads are logical.
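To make the idea of splitting one task into pieces concrete, here is a minimal Python sketch using a thread pool; the helper name `chunk_sum` is illustrative, not anything from a specific library beyond the standard `concurrent.futures`. (Note that in CPython, CPU-bound threads do not actually run in parallel because of the global interpreter lock; the sketch only illustrates how the work is divided.)

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(chunk):
    """Sum one piece of the data: the per-thread unit of work."""
    return sum(chunk)

data = list(range(1_000_000))
n_workers = 4  # illustrative; often matched to the CPU's thread count

# Split the task into n_workers pieces, one per thread.
size = len(data) // n_workers
chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]

with ThreadPoolExecutor(max_workers=n_workers) as pool:
    partials = list(pool.map(chunk_sum, chunks))

total = sum(partials)
print(total)  # same result as sum(data)
```

Each thread works on its own chunk and the partial results are combined at the end: the "divide and conquer" pattern described above.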
That may still sound abstract, so let's see how it translates to the architecture of a program on our computer.
Programs, processes and threads
We all know what a program is: code stored on our computer that is intended to carry out some specific task.
An application is a program, a driver is a program, and even the operating system is a program, one capable of running other programs inside it.
All of them are stored in binary form, since the processor only understands ones and zeros: current or no current.
To run a program, it is loaded into RAM. It runs as one or more processes, each carrying its associated binary code and the resources it needs to operate, which are assigned "intelligently" by the operating system.
The basic resources a process needs are a program counter, registers, and a stack.
- Program counter (PC): also called the instruction pointer, it keeps track of the next instruction to be executed.
- Registers: small storage locations inside the processor where an instruction, a memory address, or any other piece of data can be held.
- Stack: the data structure that stores information about the active calls and state of a running program.
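The stack of active calls can even be observed from inside a running program. A minimal Python sketch (the function names `inner` and `outer` are illustrative):

```python
import inspect

def inner():
    # inspect.stack() walks the chain of active call frames: a
    # software-level analogue of the program counter and stack
    # described above, with one frame per pending call.
    return [frame.function for frame in inspect.stack()]

def outer():
    return inner()

frames = outer()
print(frames)  # e.g. ['inner', 'outer', '<module>']
```

Each frame records where execution must resume when the call returns, which is exactly the bookkeeping the stack exists for.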
Each program is thus divided into processes, each stored in its own region of memory.
In addition, each process runs independently. This is very important to understand, because it is how the processor and the operating system execute several tasks at the same time: what we call a multitasking system.
This processing model is also why we can keep working on our PC even when one program has crashed.
The threads of a process
This is where processing threads, called subprocesses in some operating systems, come in. A thread is the unit of execution within a process. We can divide a process into sub-tasks, and each of them is an execution thread.
If a program is not multi-threaded, its processes each have only one thread, so their work can only be executed one step at a time.
If, on the contrary, we have multi-threaded processes, the work can be divided into several pieces, and each of those threads shares the resources assigned to the process. That is why we said multi-threading is more efficient.
In addition, each thread has its own registers and its own stack, so two or more of them can be in progress at the same time, instead of the whole process having to be executed as a single sequence.
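Both halves of that statement can be sketched in Python: threads share the memory of their process (so shared state needs a lock), while local variables live on each thread's own stack. The names `shared` and `worker` are illustrative:

```python
import threading

shared = []            # one object, visible to every thread in the process
lock = threading.Lock()

def worker(thread_id):
    local_count = 0    # local variable: lives on this thread's own stack
    for _ in range(1000):
        local_count += 1
    with lock:         # shared state must be protected explicitly
        shared.append((thread_id, local_count))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared))  # [(0, 1000), (1, 1000), (2, 1000), (3, 1000)]
```

Contrast this with the process example earlier: there, nothing was shared and a queue was needed to move results between address spaces.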
Sub-processes are simpler tasks that allow a process to be executed in split form. That, basically, is the final function of processing threads.
The more threads, the finer the division of processes, the greater the volume of simultaneous computation, and thus the greater the efficiency.
We haven't finished yet; there is still the pending question of what happens with a core that runs two threads. We already said that each core executes only one instruction stream at a time. The system divides execution time as efficiently as possible, assigning each task a short execution interval, and with simultaneous multithreading (Intel's Hyper-Threading, for example) the core keeps two threads in flight so that idle execution units can be filled with work from the other thread. The switching between tasks is so fast that the core appears to execute them in parallel.
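The effect of this rapid switching is easiest to see with tasks that spend their time waiting rather than computing. In this minimal Python sketch, four 0.2-second waits overlap instead of adding up, even on a single core, because the system switches away from a thread that is merely waiting:

```python
import threading
import time

def io_task(duration):
    time.sleep(duration)  # waiting, not computing: the core switches away

start = time.perf_counter()
threads = [threading.Thread(target=io_task, args=(0.2,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# Interleaving makes the four 0.2 s waits overlap: the total is
# close to 0.2 s, not 0.8 s, regardless of the core count.
print(f"elapsed: {elapsed:.2f} s")
```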
Can we see those threads or sub-processes in the system?
Not in great detail, but yes, we can see them, both in Windows and on a Mac.
In Windows, open the Task Manager, go to the "Performance" tab, and click the "Open Resource Monitor" link at the bottom. In the new window, each process is listed with its CPU consumption and its number of threads.
In the Mac Activity Monitor, the threads of each process are listed directly on the main screen.
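The same information can also be queried from inside a program. A minimal Python sketch listing the live threads of the current process (the thread name `background-worker` is illustrative):

```python
import threading
import time

def background():
    time.sleep(0.5)  # keep the thread alive long enough to be listed

t = threading.Thread(target=background, name="background-worker")
t.start()

# threading.enumerate() lists the live threads of *this* process:
# the same kind of per-process detail the resource monitor shows.
names = [th.name for th in threading.enumerate()]
print(names)  # includes 'MainThread' and 'background-worker'
t.join()
```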