KISS == Keep It Simple, Stupid
Whatever you are working on, keeping the process, the principles, the structure, and the code simple is what sets you free!
Now Xiaobai wants to use multiple threads to speed up the search, so he reaches for the threading module!
# -*- coding: utf-8 -*-
import os
import time
from threading import Thread
from configparser import RawConfigParser as rcp

class grepIt(Thread):
    def __init__(self, cdcfile, keyword):
        Thread.__init__(self)
        self.cdcf = cdcfile
        self.keyw = keyword.upper()
        self.report = ""

    def run(self):
        if self.cdcf.endswith('.ini'):
            self.report = marklni(self.cdcf, self.keyw)

def marklni(cdcfile, keyword):
    """Pattern-matching function for a single configuration file."""
    report = ""
    keyw = keyword.upper()
    cfg = rcp()
    cfg.read(cdcfile)
    nodelist = cfg.sections()
    nodelist.remove("Comment")
    nodelist.remove("Info")
    for node in nodelist:
        if keyw in node.upper():
            print(node)
            report += "\n %s" % node
        else:
            for item in cfg.items(node):
                if keyw in item[0].upper():
                    report += "\n %s\\%s" % (node, item)
    return report

def grepSearch(cdcpath, keyword):
    """Multi-threaded group search function."""
    begin = time.time()
    filelist = os.listdir(cdcpath)
    # List used to record the search threads we start
    searchlist = []
    for cdcf in filelist:
        pathcdcf = "%s\\%s" % (cdcpath, cdcf)
        #print(pathcdcf)
        # Create the thread object
        current = grepIt(pathcdcf, keyword)
        # Append it to the thread list
        searchlist.append(current)
        # Kick off the thread
        current.start()
    for searcher in searchlist:
        searcher.join()
        print("Search from", searcher.cdcf, "out", searcher.report)
    print("usage %s s" % (time.time() - begin))

if __name__ == "__main__":
    grepSearch("F:\\back", "images")
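To check how much the thread-per-file approach really buys, it helps to time a single-threaded baseline against it. The sketch below is only a minimal sequential version for comparison: it assumes the marklni function from the listing above is available in the same module, and "F:\\back" / "images" are just the same sample arguments.

# Minimal single-threaded baseline for timing comparison.
# Assumes marklni() from the listing above is defined in the same module.
import os
import time

def grepSearchSequential(cdcpath, keyword):
    """Search every .ini file under cdcpath one after another and time it."""
    begin = time.time()
    reports = {}
    for cdcf in os.listdir(cdcpath):
        pathcdcf = os.path.join(cdcpath, cdcf)
        if pathcdcf.endswith('.ini'):
            reports[pathcdcf] = marklni(pathcdcf, keyword)
    print("sequential usage %s s" % (time.time() - begin))
    return reports

if __name__ == "__main__":
    grepSearchSequential("F:\\back", "images")   # same sample arguments as above

Because the work here is mostly disk I/O and parsing, the gap between the two timings tells you whether the extra threads are actually helping.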
Refactoring for execution efficiency is a bit like a brain teaser: as long as you carefully and intuitively analyze the overall runtime behavior of the software, the bottleneck is easy to pin down.
Once you understand the problem precisely, you have a clear direction when you go to Xingzhe for advice, and it becomes much easier to get useful hints.
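One concrete way to look at the runtime behavior before guessing is to profile a single run with the standard cProfile module. The sketch below is only an illustration: it assumes grepSearch from the listing above and the same sample path, and note that cProfile only measures the main thread, so time spent inside the worker threads mostly shows up as time blocked in join().

# Profile one threaded search run to see where the main thread spends its time.
# Assumes grepSearch() from the listing above; the path is just a sample.
import cProfile
import pstats

profiler = cProfile.Profile()
profiler.enable()
grepSearch("F:\\back", "images")
profiler.disable()

# Show the ten most expensive calls, sorted by cumulative time.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)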
A small exercise:
Use Lock and RLock to implement simple synchronization between threads: have 10 threads increment the same shared variable, using the locking mechanism to guarantee that the final value is correct.
# -*- coding: utf-8 -*-
import threading

class mythread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        global n
        # Hold the lock while reading and incrementing the shared counter
        with lock:
            print('Thread:', n)
            n += 1

n = 0
t = []
lock = threading.Lock()
for i in range(10):
    my = mythread()
    t.append(my)
for i in range(10):
    t[i].start()
for i in range(10):
    t[i].join()
The output looks like this:
Thread: 0
Thread: 1
Thread: 2
Thread: 3
Thread: 4
Thread: 5
Thread: 6
Thread: 7
Thread: 8
Thread: 9
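The solution above only exercises Lock. For the RLock half of the exercise, the sketch below shows the one property that matters: an RLock is re-entrant, so a thread that already holds it may acquire it again, whereas the same nested acquire on a plain Lock would deadlock. The helper names and the thread count here are just illustrative.

# -*- coding: utf-8 -*-
# Minimal RLock sketch: a thread that holds the lock may re-acquire it.
import threading

n = 0
rlock = threading.RLock()

def log_value():
    with rlock:        # inner acquire: would deadlock with a plain Lock
        print('Thread:', n)

def increment():
    global n
    with rlock:        # outer acquire
        n += 1
        log_value()    # calls a helper that takes the same lock again

threads = [threading.Thread(target=increment) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()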
Original post: http://blog.51cto.com/9473774/2091399