
PythonGetWebTitle (Batch-Fetching Website Titles)


I recently wrote a script to batch-fetch website titles. It is slow, not very practical, and has plenty of bugs, so use it with caution! This post mainly sums up the pitfalls I ran into; treat it as practice with these modules.

The source code is as follows:

import requests
import argparse
import re


class bcolors:
    # ANSI escape sequences for colored terminal output
    HEADER = '\033[95m'
    OKBLUE = '\033[94m'
    OKGREEN = '\033[92m'
    WARNING = '\033[93m'
    FAIL = '\033[91m'
    ENDC = '\033[0m'
    BOLD = '\033[1m'
    UNDERLINE = '\033[4m'

def parser_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("-f", "--file", help="path to the domain list file")
    # parser.add_argument("-f", "--file", help="path to the domain list file", action="store_true")  # store_true discards the value, so it is not usable here
    return parser.parse_args()

def httpheaders(url):
    # Plain-HTTP traffic is routed through a local proxy (e.g. Burp on 127.0.0.1:8080)
    proxies = {
        'http': 'http://127.0.0.1:8080'
    }
    headers = {
        'Connection': 'close',
        'Upgrade-Insecure-Requests': '1',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate, sdch, br',
        'Accept-Language': 'zh-CN,zh;q=0.8',
    }
    # Silence the InsecureRequestWarning caused by verify=False (see pitfall 1 below)
    requests.packages.urllib3.disable_warnings()
    res = requests.get(url, proxies=proxies, headers=headers, timeout=10, verify=False)
    res.encoding = res.apparent_encoding
    head = res.headers
    # print('[+]url:' + url, ' ' + 'Content-Type:' + head['Content-Type'])
    title = re.findall("<title>(.*)</title>", res.text, re.IGNORECASE)[0].strip()
    # Content-Length is not always present (e.g. chunked responses), so fall back to the body length
    length = head.get('Content-Length', str(len(res.content)))
    print(bcolors.OKGREEN + '[+]url:' + url, ' ' + 'title:' + title + '  length:' + length + bcolors.ENDC)
    
def fileopen(filename):
    with open(filename, 'r') as obj:
        for adomain in obj.readlines():
            adomain = adomain.rstrip('\n')
            try:
                httpheaders(adomain)
            except Exception as e:
                # Any failure (refused connection, timeout, missing title, ...) ends up here
                print(bcolors.WARNING + '[+]' + adomain + "   request failed" + bcolors.ENDC)
                
if __name__ == "__main__":
    try:
        abc = vars(parser_args())  # vars() turns the argparse Namespace into a plain dict
        a = abc['file']
        fileopen(a)
    except FileNotFoundError as e:
        print('No such file in the current directory: ' + a)
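For reference, a minimal usage sketch. It assumes the script above is saved as gettitle.py and that domains.txt holds one full URL per line (scheme included); both file names are hypothetical.

# Quick check of fileopen() without going through the command line.
# Assumption: the script above is saved as gettitle.py in the same directory.
from gettitle import fileopen

# domains.txt, one URL per line, for example:
#   https://www.example.com
#   http://test.example.org
fileopen("domains.txt")

# The script itself would normally be run as:
#   python3 gettitle.py -f domains.txt

Note that the proxies dict only routes plain-HTTP requests through 127.0.0.1:8080 (presumably a local Burp or similar intercepting proxy); if nothing is listening on that port, HTTP targets will land in the "request failed" branch.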

 

The pitfalls I hit this time:

1. When sending HTTPS requests with Python 3 requests and certificate verification disabled (verify=False), the console still prints the following warning:

 

InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings

Seeing this message on the command line is hard to bear if you are at all fussy about clean output.

Fix: call requests.packages.urllib3.disable_warnings() before issuing the request.
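A minimal sketch of two equivalent ways to suppress the warning (the target URL is only a placeholder):

import requests
import urllib3

# Option 1: the call used in the script above
requests.packages.urllib3.disable_warnings()

# Option 2: call urllib3 directly and silence only this warning class
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

res = requests.get("https://self-signed.example.com", verify=False, timeout=10)
print(res.status_code)

Either call works because requests.packages.urllib3 is just an alias for the urllib3 package that requests depends on.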

 

2. On the usage of the argparse module:

def parser_args():
    parser = argparse.ArgumentParser()  # create an ArgumentParser() object
    parser.add_argument("-f", "--file", help="path to the domain list file")  # the second parameter matters: --file becomes the key, so with "-f aaa" the value of file is 'aaa'
    # parser.add_argument("-f", "--file", help="path to the domain list file", action="store_true")  # store_true only stores True/False and discards the value, so it is not usable here
    return parser.parse_args()
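A short check of what parse_args() actually returns; here the argument list is passed in explicitly instead of coming from sys.argv:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-f", "--file", help="path to the domain list file")

args = parser.parse_args(["-f", "domains.txt"])  # same as running: script.py -f domains.txt
print(args.file)   # 'domains.txt' -- the long option name becomes the attribute name
print(vars(args))  # {'file': 'domains.txt'} -- vars() turns the Namespace into a dict, as in the script

args = parser.parse_args([])  # -f omitted
print(args.file)   # None -- which is why the script blows up later if -f is not supplied

Passing required=True to add_argument would make argparse itself report the missing -f instead of silently storing None.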


Original post: https://www.cnblogs.com/devapath/p/12446259.html
