Commonly used methods in Python multithreaded programming:
1. join([timeout]): if a thread starts another thread and must wait for it to finish before continuing, the caller can invoke the child thread's join() method. The optional timeout argument is the maximum time join() will block waiting for the thread, not a limit on how long the thread may run.
2. isAlive(): check whether the thread is still running.
3. getName(): get the thread's name.
4. setDaemon(): if the child threads should exit together with the main thread, call setDaemon(True) on each child thread before starting it.
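The four methods above can be sketched in one short script. This uses the Python 3 spellings (isAlive(), getName(), and setDaemon() survive as is_alive(), the name attribute, and the daemon attribute), while the article's own snippets use Python 2:

```python
import threading
import time

def work():
    time.sleep(0.2)  # pretend to do something

t = threading.Thread(target=work, name='worker')
t.daemon = True          # modern spelling of setDaemon(True): exit with the main thread
t.start()
print(t.name)            # getName() in Python 2: prints 'worker'
print(t.is_alive())      # isAlive() in Python 2: True while work() is still running
t.join(timeout=1)        # block at most 1 second waiting for the thread to finish
print(t.is_alive())      # False once the thread has finished
```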
Python thread synchronization:
(1) Simple thread synchronization with threading's Lock and RLock:
import threading
import time
class mythread(threading.Thread):
    def __init__(self, threadname):
        threading.Thread.__init__(self, name=threadname)

    def run(self):
        global x
        lock.acquire()
        for i in range(3):
            x = x + 1
        time.sleep(1)
        print x
        lock.release()

if __name__ == '__main__':
    lock = threading.RLock()
    t1 = []
    for i in range(10):
        t = mythread(str(i))
        t1.append(t)
    x = 0
    for i in t1:
        i.start()
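The example above happens to use RLock; the practical difference from a plain Lock is reentrancy, which this minimal sketch shows (Python 3 syntax, using the lock as a context manager):

```python
import threading

rlock = threading.RLock()

# The same thread may acquire an RLock it already holds; a plain
# threading.Lock() would deadlock on the second acquire() here.
with rlock:
    with rlock:
        reentrant = True
print(reentrant)  # True
```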
(2) Using a condition variable (threading.Condition) to keep threads synchronized:
# coding=utf-8
import threading

class Producer(threading.Thread):
    def __init__(self, threadname):
        threading.Thread.__init__(self, name=threadname)

    def run(self):
        global x
        con.acquire()
        if x == 10000:
            con.wait()
        else:
            for i in range(10000):
                x = x + 1
            con.notify()
        print x
        con.release()

class Consumer(threading.Thread):
    def __init__(self, threadname):
        threading.Thread.__init__(self, name=threadname)

    def run(self):
        global x
        con.acquire()
        if x == 0:
            con.wait()
        else:
            for i in range(10000):
                x = x - 1
            con.notify()
        print x
        con.release()

if __name__ == '__main__':
    con = threading.Condition()
    x = 0
    p = Producer('Producer')
    c = Consumer('Consumer')
    p.start()
    c.start()
    p.join()
    c.join()
    print x
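The same notify/wait handshake in Python 3 form, as a minimal sketch: the wait is guarded by a while loop, the idiomatic way to survive spurious or out-of-order wakeups (the producer/consumer functions here are illustrative, not part of the article's example):

```python
import threading

con = threading.Condition()
items = []
received = []

def consumer():
    with con:                 # acquire the condition's underlying lock
        while not items:      # re-check the predicate after every wakeup
            con.wait()        # releases the lock while blocked
        received.append(items.pop())

def producer():
    with con:
        items.append(42)
        con.notify()          # wake one waiting thread

c = threading.Thread(target=consumer)
c.start()
p = threading.Thread(target=producer)
p.start()
p.join()
c.join()
print(received)  # [42]
```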
(3) Using a queue (Queue) to keep threads synchronized:
# coding=utf-8
import threading
import Queue
import time
import random

class Producer(threading.Thread):
    def __init__(self, threadname):
        threading.Thread.__init__(self, name=threadname)

    def run(self):
        global queue
        i = random.randint(1, 5)
        queue.put(i)
        print self.getName(), ' put %d to queue' % (i)
        time.sleep(1)

class Consumer(threading.Thread):
    def __init__(self, threadname):
        threading.Thread.__init__(self, name=threadname)

    def run(self):
        global queue
        item = queue.get()
        print self.getName(), ' get %d from queue' % (item)
        time.sleep(1)

if __name__ == '__main__':
    queue = Queue.Queue()
    plist = []
    clist = []
    for i in range(3):
        p = Producer('Producer' + str(i))
        plist.append(p)
    for j in range(3):
        c = Consumer('Consumer' + str(j))
        clist.append(c)
    for pt in plist:
        pt.start()
        pt.join()
    for ct in clist:
        ct.start()
        ct.join()
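Besides get() and put(), the queue module (named Queue in Python 2) offers a second synchronization mechanism worth knowing: task_done() and Queue.join(), which let the producing side block until every queued item has been fully processed. A minimal Python 3 sketch:

```python
import threading
import queue  # the Queue module in Python 2

q = queue.Queue()
results = []

def worker():
    while True:
        item = q.get()
        results.append(item * 2)
        q.task_done()  # tell the queue this item is fully processed

t = threading.Thread(target=worker, daemon=True)
t.start()

for i in range(5):
    q.put(i)
q.join()  # blocks until task_done() has been called once per put() item
print(sorted(results))  # [0, 2, 4, 6, 8]
```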
Another implementation of the producer-consumer pattern:
# coding=utf-8
import time
import threading
import Queue

class Consumer(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self._queue = queue

    def run(self):
        while True:
            # queue.get() blocks the current thread until an item is retrieved.
            msg = self._queue.get()
            # Check if the current message is the "quit" sentinel
            if isinstance(msg, str) and msg == 'quit':
                # if so, exit the loop
                break
            # "Process" (or in our case, print) the queue item
            print "I'm a thread, and I received %s!!" % msg
        # Always be friendly!
        print 'Bye byes!'

class Producer(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self._queue = queue

    def run(self):
        # variable to keep track of when we started
        start_time = time.time()
        # While under 5 seconds..
        while time.time() - start_time < 5:
            # "Produce" a piece of work and stick it in the queue for the Consumer to process
            self._queue.put('something at %s' % time.time())
            # Sleep a bit just to avoid an absurd number of messages
            time.sleep(1)
        # This is the "quit" message for killing a thread.
        self._queue.put('quit')

if __name__ == '__main__':
    queue = Queue.Queue()
    consumer = Consumer(queue)
    consumer.start()
    producer1 = Producer(queue)
    producer1.start()
An implementation using a thread pool plus a synchronized queue (Queue):
# A more realistic thread pool example
# coding=utf-8
import time
import threading
import Queue
import urllib2

class Consumer(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self._queue = queue

    def run(self):
        while True:
            content = self._queue.get()
            if isinstance(content, str) and content == 'quit':
                break
            response = urllib2.urlopen(content)
        print 'Bye byes!'

def Producer():
    urls = [
        'http://www.python.org', 'http://www.yahoo.com',
        'http://www.scala.org', 'http://cn.bing.com',
        # etc..
    ]
    queue = Queue.Queue()
    worker_threads = build_worker_pool(queue, 4)
    start_time = time.time()

    # Add the urls to process
    for url in urls:
        queue.put(url)
    # Add one 'quit' message per worker, so every thread gets to exit
    for worker in worker_threads:
        queue.put('quit')
    for worker in worker_threads:
        worker.join()

    print 'Done! Time taken: {}'.format(time.time() - start_time)

def build_worker_pool(queue, size):
    workers = []
    for _ in range(size):
        worker = Consumer(queue)
        worker.start()
        workers.append(worker)
    return workers

if __name__ == '__main__':
    Producer()
Another implementation using a thread pool + map:
import urllib2
from multiprocessing.dummy import Pool as ThreadPool

urls = [
    'http://www.python.org',
    'http://www.python.org/about/',
    'http://www.python.org/doc/',
    'http://www.python.org/download/',
    'http://www.python.org/community/',
]

# Make the Pool of workers
pool = ThreadPool(4)
# Open the urls in their own threads
# and return the results
results = pool.map(urllib2.urlopen, urls)
# Close the pool and wait for the work to finish
pool.close()
pool.join()
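On Python 3, the same map-style thread pool is available in the standard library as concurrent.futures.ThreadPoolExecutor. A small sketch, where fetch is a hypothetical stand-in for urllib2.urlopen so the example runs without network access:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # stand-in for urllib2.urlopen; any callable works with Executor.map
    return len(url)

urls = ['http://www.python.org', 'http://www.python.org/about/']

# The with-block shuts the pool down and joins its threads on exit,
# playing the role of pool.close() followed by pool.join()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, urls))
print(results)  # [21, 28]
```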