
Multiprocessing with Python


By NIIT Editorial

Published on 11/08/2021

8 minutes

Multiprocessing is a package that supports spawning processes using an API similar to the threading module. By using sub-processes instead of threads, the multiprocessing package offers both local and remote concurrency, side-stepping the Global Interpreter Lock (GIL). To use all the cores in a machine, you must fork separate processes, which can then run truly in parallel for higher speed. Working with a group of processes can be difficult, however: if communication among the processes is required, coordinating them frequently gets complicated.
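Because the GIL stops threads from executing Python bytecode in parallel, CPU-bound work is usually split across processes instead. Here is a minimal sketch using multiprocessing.Pool; the function names are illustrative, not part of the standard library:

```python
import multiprocessing

def cpu_bound(n):
    """A CPU-bound task: sum of squares below n."""
    return sum(i * i for i in range(n))

def run_parallel(inputs):
    """Fan the tasks out across one worker process per core."""
    with multiprocessing.Pool() as pool:
        return pool.map(cpu_bound, inputs)

if __name__ == "__main__":
    # Each input is handled by a separate process, so all cores can be used.
    print(run_parallel([10, 100, 1000]))
```

Unlike a thread pool, each worker here is a full interpreter process, so the GIL never serializes the computation.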

The multiprocessing package in Python provides a clean, clear API for dividing work between many processes.

Start Methods:

Multiprocessing supports different ways to start a process. Those ways are as follows:

  • Spawn: The parent process starts a fresh Python interpreter process, and the child inherits only the resources it needs to run its target. Compared with the other two techniques, fork and forkserver, this method is slower to start a process, but it is the default on Windows.
  •  Fork: The parent uses os.fork() in Python to clone the interpreter, creating a child process. When the new interpreter process is created, both processes carry on from the next instruction. This is the default on Unix.
  •  Forkserver: A server process is started up front; whenever a new process is needed, the parent asks the server to fork one on its behalf.
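The start method can be chosen explicitly with multiprocessing.get_context, which is useful when you want, say, spawn semantics on a platform whose default is fork. A short sketch (the helper names below are hypothetical, not from the original article):

```python
import multiprocessing as mp
import os

def report(method):
    print(method, "child running in PID", os.getpid())

def run_with(method):
    """Start one child with the given start method and return its exit code."""
    ctx = mp.get_context(method)
    p = ctx.Process(target=report, args=(method,))
    p.start()
    p.join()
    return p.exitcode

if __name__ == "__main__":
    # "fork" is the Unix default; "spawn" is the default on Windows.
    for method in ("fork", "spawn"):
        print(method, "exit code:", run_with(method))
```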

Basic fork in Python:

Let us have a look at a basic fork in Python
 

import os

if __name__ == "__main__":
  child_id = os.fork()
  # os.fork() returns the child's PID in the parent and 0 in the child
  if child_id != 0:
    print("Parent PID :", os.getpid())
  else:
    print("Child PID : ", os.getpid())

Output: both the parent and child PIDs are printed; the exact values vary from run to run.

The next example complements the original fork code and sets an environment variable that is then copied into the child process.

Example 1: Fork in Python:

 

#Using fork to create a child process

import os

if __name__ == "__main__":
    child_id = os.fork()
    print("Old value of env variable : ", os.environ["USER"])
    # change the variable (this line runs in both parent and child)
    os.environ["USER"] = "ADMIN"
    print("New value of env variable : ", os.environ["USER"])
    if child_id != 0:
        # parent process
        print("Parent PID :", os.getpid())
        print("Parent Process Value of env variable : ", os.environ["USER"])
    else:
        # child process
        print("Child PID : ", os.getpid())
        print("Child Process Value of env variable : ", os.environ["USER"])

Output

In the above output, you can see that the changed environment variable shows up in the child process as well as the parent process, because each process runs the assignment in its own copy of the environment after the fork. You can test this further by changing the environment variable only in the parent after the fork: you will observe that the child's copy is now separate and unaffected. The subprocess module, though less complicated than the multiprocessing module, can also be used to manage child processes.
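That separation can be demonstrated directly: change the variable only in the parent after the fork, and the child's copy stays untouched. A sketch, with an illustrative function name and values:

```python
import os
import time

def fork_and_check():
    """Change USER in the parent after forking; the child keeps the old value."""
    os.environ["USER"] = "ORIGINAL"
    pid = os.fork()
    if pid == 0:
        # Child: wait until the parent has changed its own copy,
        # then report our unchanged value and exit immediately.
        time.sleep(0.2)
        print("Child sees:", os.environ["USER"])  # still ORIGINAL
        os._exit(0)
    # Parent: this assignment affects only the parent's environment.
    os.environ["USER"] = "ADMIN"
    os.waitpid(pid, 0)
    return os.environ["USER"]

if __name__ == "__main__":
    print("Parent sees:", fork_and_check())
```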

Multiprocessing:

Simple Multiprocessing:

Now that you have hands-on experience with the basics of forking in Python, let us look at an example of how the higher-level multiprocessing library works.

Example 2:
 

import multiprocessing
import os
from time import sleep

def childThread(thread_id, duration):
    child_id = os.getpid()
    parent_id = os.getppid()
    print("*******Inside Child********")
    print("Child ID : ", child_id)
    print('Process sleeping...')
    sleep(duration)
    print('Process sleeping done...')


if __name__ == '__main__':
    print("*******Inside parent********")
    # get the parent's PID
    pid = os.getpid()
    print("Parent ID : ", pid)
    # create a child process
    thread = multiprocessing.Process(target=childThread, args=('Child1', 5))
    thread.start()
    print("Thread started..")
    thread.join()
    print("Thread completed..")

 

Output :

It is seen that the main process forks an interpreter process, which sleeps for 5 seconds. The child's instructions begin to execute when thread.start() is called, and thread.join() blocks until the child finishes.
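Processes do not share memory, so results usually travel back to the parent through a multiprocessing.Queue. A small sketch of that round trip (the helper names are illustrative):

```python
import multiprocessing
import os

def worker(q):
    # The child sends its own PID back through the queue.
    q.put(("child-pid", os.getpid()))

def run():
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(q,))
    p.start()
    tag, child_pid = q.get()  # drain the queue before joining
    p.join()
    return tag, child_pid != os.getpid()

if __name__ == "__main__":
    # The child's PID differs from the parent's, proving it ran in another process.
    print(run())
```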

Building an Asynchronous Net-SNMP Engine:

Until now, nothing useful has been built. The following example solves a realistic, practical problem by using the Net-SNMP Python bindings asynchronously.

Before starting, check that the following programs are installed on your system, so that you can use both the multiprocessing library (Python 2.6 or later) and the Net-SNMP bindings:

  1. Firstly, download Python from the official Python downloads page.

  2. Adjust your shell path so that the new build launches when you type "python". For example, if Python is compiled into /usr/local/bin/, you will need to prepend /usr/local/bin to your $PATH variable to make sure it comes before an older version of Python.

  3. Install Setuptools.

  4. Lastly, download Net-SNMP and configure it with the "--with-python-modules" flag, in addition to any other flags required by your operating system:

./configure --with-python-modules

 

Check the code of the following module and then run it.

 

Example 3:

Multiprocessing SNMP:

import multiprocessing
from netsnmp import Varbind, snmpget

class recordHost():
    def __init__(self):
        self.name = None
        self.request = None

    def set_record(self, name, request):
        self.name = name
        self.request = request

class session():
    def __init__(self):
        # default values
        self.id = 'sysDescr'
        self.dest = 'localhost'
        self.rec = recordHost()
        self.rec.name = self.dest
        self.vers = 2
        self.commun = 'public'
        self.snmp_var = None
        self.verbose = True

    def set_variables(self, c_id, dest, vers, commun, verbose):
        self.id = c_id
        self.dest = dest
        self.rec = recordHost()
        self.rec.name = self.dest
        self.vers = vers
        self.commun = commun
        self.snmp_var = Varbind(self.id, 0)
        self.verbose = verbose

    def snmp_query(self):
        info = snmpget(self.snmp_var, Version=self.vers,
                       DestHost=self.dest, Community=self.commun)
        # set record
        self.rec.request = info
        return self.rec

def request_query(client):
    if isinstance(client, session):
        return client.snmp_query()
    # create an instance from a hostname string
    else:
        ses = session()
        ses.set_variables('sysDescr', client, 2, 'public', True)
        return ses.snmp_query()

def create_thread(inp, out):
    # pull hosts until the "STOP" sentinel arrives
    for ip in iter(inp.get, 'STOP'):
        # send request
        re = request_query(ip)
        # set output
        out.put(re)

def submit_task(client_arr, inp_que):
    for client in client_arr:
        inp_que.put(client)
    return inp_que

if __name__ == '__main__':
    # client array
    client_arr = ["localhost", "localhost"]
    # create queues
    inp_que = multiprocessing.Queue()
    res_que = multiprocessing.Queue()
    # submit tasks
    inp_que = submit_task(client_arr, inp_que)

    for proc in range(len(client_arr)):
        thread = multiprocessing.Process(target=create_thread,
                                         args=(inp_que, res_que))
        thread.start()

    # final results
    print('Results : ')
    for i in range(len(client_arr)):
        print(res_que.get().request)

    # stop the worker processes
    print('Stop child process ')
    for i in range(len(client_arr)):
        inp_que.put('STOP')

Here, the session class wraps the SNMP query in a method that calls the Net-SNMP library. Since that call blocks, the multiprocessing library is used to run several queries at once, with an API that closely mirrors the threading module.

Pay special attention to the hosts list in the client-array section. With it, you could potentially run asynchronous SNMP queries against 50 or 100 hosts, or more, depending on the hardware you are running on. Finally, the last two sections take the results off the result queue and then put one "STOP" sentinel per worker into the input queue.
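The same queue-and-sentinel pattern works for any blocking task, not just SNMP. Here is a self-contained sketch using a stand-in square() task so it runs without Net-SNMP installed (all names are illustrative):

```python
import multiprocessing

def square(x):
    return x * x

def worker(inp, out):
    # Pull tasks until the "STOP" sentinel arrives.
    for task in iter(inp.get, "STOP"):
        out.put(square(task))

def run_pool(tasks, n_workers=2):
    inp = multiprocessing.Queue()
    out = multiprocessing.Queue()
    for t in tasks:
        inp.put(t)
    procs = [multiprocessing.Process(target=worker, args=(inp, out))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    # Drain all results first, then send one sentinel per worker.
    results = sorted(out.get() for _ in tasks)
    for _ in procs:
        inp.put("STOP")
    for p in procs:
        p.join()
    return results

if __name__ == "__main__":
    print(run_pool([1, 2, 3, 4]))
```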

Configuring OS X's SNMP Daemon:

To configure OS X's SNMP daemon for testing, you need to rewrite its configuration file with the help of three shell commands.

These commands back up your existing configuration, write a new one, and then restart the SNMP daemon.
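The original commands are not reproduced here, but the three steps can be sketched as follows, assuming a standard Net-SNMP install (the paths and the launchctl label may differ on your system):

```shell
# 1. Back up the existing configuration
sudo cp /etc/snmp/snmpd.conf /etc/snmp/snmpd.conf.bak
# 2. Write a minimal read-only configuration
sudo sh -c 'echo "rocommunity public" > /etc/snmp/snmpd.conf'
# 3. Restart the SNMP daemon
sudo launchctl stop org.net-snmp.snmpd && sudo launchctl start org.net-snmp.snmpd
```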

For the OS X SNMP daemon to run permanently, edit the following launchd property list:

/System/Library/LaunchDaemons/org.net-snmp.snmpd.plist

 

Conclusion

It can be concluded that a few notes from the official documentation should be kept in mind: avoid shared state where possible, explicitly join the processes that you create, and make sure all queue items have been consumed before you join, since a deadlock can otherwise occur.
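The deadlock warning deserves a concrete illustration: a child that has put items on a queue will not exit until its buffered data is flushed, so the parent must drain the queue before joining. A sketch:

```python
import multiprocessing

def producer(q):
    for i in range(1000):
        q.put(i)

def run():
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=producer, args=(q,))
    p.start()
    # Drain the queue BEFORE joining; joining first can deadlock because
    # the child waits for its buffered items to be consumed.
    items = [q.get() for _ in range(1000)]
    p.join()
    return len(items)

if __name__ == "__main__":
    print(run())
```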

Multiprocessing is a powerful addition to the Python programming language. While the Global Interpreter Lock (GIL) makes threading a weak tool for CPU-bound work, multiprocessing more than makes up for it. To understand the intricacies and concepts of Python, you can check out the Professional Program in Full Stack Software Engineering course, which provides in-depth knowledge.

 


