Urllib
1. What is a web crawler?
If we compare the internet to a big spider web, then the data on each computer is prey caught in that web, and a crawler program is a little spider that crawls along the web grabbing the data it wants.
Explanation 1: a program that crawls web pages starting from a URL (e.g. http://www.taobao.com) and extracts useful information.
Explanation 2: a program that simulates a browser, sends requests to a server, and receives the response.
2. What is the core of a crawler?
1. Crawl the page: fetch the entire page, including everything in it
2. Parse the data: extract what you need from the fetched page
3. The hard part: the ongoing contest between crawlers and anti-crawling measures
3. What are crawlers used for?
Data analysis / building datasets
Cold-starting social apps
Public-opinion monitoring
Monitoring competitors
4. Types of crawlers?
5. Anti-crawling measures?
1. User-Agent checks:
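For example, many sites treat the default urllib User-Agent ("Python-urllib/3.x") as a bot. A minimal sketch of my own, not from the original notes (httpbin.org is assumed reachable; it simply echoes the request headers back):

import urllib.request

url = 'https://httpbin.org/get'

# Default request: the echoed headers show a User-Agent such as "Python-urllib/3.x",
# which is an easy signal for anti-crawling checks
print(urllib.request.urlopen(url).read().decode('utf-8'))

# Browser-style request: the same call with a browser User-Agent attached
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'}
request = urllib.request.Request(url=url, headers=headers)
print(urllib.request.urlopen(request).read().decode('utf-8'))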
6. Using the urllib library
Use urllib to fetch the source of the Baidu homepage
# Use urllib to fetch the source of the Baidu homepage
import urllib.request

# (1) Define the url, i.e. the address you want to visit
url = 'http://www.baidu.com'

# (2) Simulate a browser sending a request to the server; response is the server's reply
response = urllib.request.urlopen(url)

# (3) Get the page source from the response ("content" = the body)
# The read method returns the data as bytes
# We need to convert the bytes to a string
# bytes --> string is decoding: decode('encoding name')
content = response.read().decode('utf-8')

# (4) Print the data
print(content)
One type: HTTPResponse
import urllib.request

url = 'http://www.baidu.com'

# Simulate a browser sending a request to the server
response = urllib.request.urlopen(url)

# One type and six methods
# response is of type HTTPResponse
# print(type(response))
Six methods: read, readline, readlines, getcode, geturl, getheaders
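A minimal sketch of my own exercising these six methods on the response object from the snippet above (the body can only be read once per request, so pick one of read/readline/readlines):

print(response.getcode())     # HTTP status code, e.g. 200
print(response.geturl())      # the URL that was actually requested
print(response.getheaders())  # all response headers as a list of (name, value) tuples

content = response.read()     # the whole body, as bytes
# line = response.readline()  # alternatively: a single line of bytes
# lines = response.readlines()# alternatively: all lines as a list of bytes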
The video's address (the long mp4 URL used in the code below):
Downloading what you crawl
import urllib.request

# Download a web page
# url_page = 'http://www.baidu.com'
# url is the download path, filename is the name of the local file
# In Python you can pass a variable name or write the value directly
# urllib.request.urlretrieve(url_page, 'baidu.html')

# Download an image
# url_img = 'https://img1.baidu.com/it/u=3004965690,4089234593&fm=26&fmt=auto&gp=0.jpg'
# urllib.request.urlretrieve(url=url_img, filename='lisa.jpg')

# Download a video
url_video = 'https://vd3.bdstatic.com/mda-mhkku4ndaka5etk3/1080p/cae_h264/1629557146541497769/mda-mhkku4ndaka5etk3.mp4?v_from_s=hkapp-haokan-tucheng&auth_key=1629687514-0-0-7ed57ed7d1168bb1f06d18a4ea214300&bcevod_channel=searchbox_feed&pd=1&pt=3&abtest='
urllib.request.urlretrieve(url_video, 'hxekyyds.mp4')
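As an optional extra (not in the original notes), urlretrieve also accepts a reporthook callback that can be used to print download progress; the callback name below is just an illustration:

import urllib.request

url_page = 'http://www.baidu.com'

def report(block_num, block_size, total_size):
    # Called repeatedly during the download with the number of blocks fetched so far,
    # the block size in bytes, and the total size reported by the server (-1 if unknown)
    downloaded = block_num * block_size
    if total_size > 0:
        print(f'{min(downloaded, total_size)}/{total_size} bytes')

urllib.request.urlretrieve(url_page, 'baidu.html', reporthook=report)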
7. Customizing the request object
Components of a URL
https://www.baidu.com/s?wd=周杰伦
protocol     host            port     path   parameters   anchor
http/https   www.baidu.com   80/443   s      wd=周杰伦     #
Common port numbers: http 80, https 443, mysql 3306, oracle 1521, redis 6379, mongodb 27017
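As a quick cross-check (my own addition), urllib.parse.urlparse splits a URL into exactly these components; the #top fragment is only there for illustration:

from urllib.parse import urlparse

parts = urlparse('https://www.baidu.com:443/s?wd=周杰伦#top')
print(parts.scheme)    # 'https'         -> protocol
print(parts.hostname)  # 'www.baidu.com' -> host
print(parts.port)      # 443             -> port
print(parts.path)      # '/s'            -> path
print(parts.query)     # 'wd=周杰伦'      -> parameters
print(parts.fragment)  # 'top'           -> anchor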
Finding the UA (User-Agent)
import urllib.request
url = 'https://www.baidu.com'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'
}

# Because urlopen cannot take a dict, headers cannot be passed to it;
# the call below fails with a TypeError about the unexpected 'headers' argument
response = urllib.request.urlopen(url=url, headers=headers)
content = response.read().decode('utf8')
print(content)
Because urlopen cannot take a dict, headers cannot be passed to it; customize a Request object instead:
import urllib.request

url = 'https://www.baidu.com'

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'
}

# Customize the request object
# With several parameters, pass them by keyword (url=..., headers=...);
# otherwise the call cannot tell which argument is which (headers would be taken as data)
request = urllib.request.Request(url=url, headers=headers)

response = urllib.request.urlopen(request)

content = response.read().decode('utf8')
print(content)
8. Encoding and decoding
1. GET requests: urllib.parse.quote()
Get the page source of https://www.baidu.com/s?wd=周杰伦
# https://www.baidu.com/s?wd=%E5%91%A8%E6%9D%B0%E4%BC%A6
import urllib.request
import urllib.parse
url = 'https://www.baidu.com/s?wd='
# Customizing the request object is how we deal with the first anti-crawling measure (UA checks)
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'
}

# Turn the three characters 周杰伦 into their percent-encoded (Unicode) form
# This relies on urllib.parse
name = urllib.parse.quote('周杰伦')
url = url + name

# Customize the request object
request = urllib.request.Request(url=url, headers=headers)

# Simulate a browser sending the request to the server
response = urllib.request.urlopen(request)

# Get the response content
content = response.read().decode('utf-8')

# Print the data
print(content)
This fails:
<!DOCTYPE html>
<html lang="zh-CN">
<head><meta charset="utf-8"><title>百度安全验证</title><meta http-equiv="Content-Type" content="text/html; charset=utf-8"><meta name="apple-mobile-web-app-capable" content="yes"><meta name="apple-mobile-web-app-status-bar-style" content="black"><meta name="viewport" content="width=device-width, user-scalable=no, initial-scale=1.0, minimum-scale=1.0, maximum-scale=1.0"><meta name="format-detection" content="telephone=no, email=no"><link rel="shortcut icon" href="https://www.baidu.com/favicon.ico" type="image/x-icon"><link rel="icon" sizes="any" mask href="https://www.baidu.com/img/baidu.svg"><meta http-equiv="X-UA-Compatible" content="IE=Edge"><meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests"><link rel="stylesheet" href="https://ppui-static-wap.cdn.bcebos.com/static/touch/css/api/mkdjump_aac6df1.css" />
</head>
<body><div class="timeout hide-callback"><div class="timeout-img"></div><div class="timeout-title">网络不给力,请稍后重试</div><button type="button" class="timeout-button">返回首页</button></div><div class="timeout-feedback hide-callback"><div class="timeout-feedback-icon"></div><p class="timeout-feedback-title">问题反馈</p></div><script src="https://ppui-static-wap.cdn.bcebos.com/static/touch/js/mkdjump_v2_21d1ae1.js"></script>
</body>
</html>
Process finished with exit code 0
Adding a Cookie makes it work!
[Crawler] How to get past Baidu's security verification when crawling, i.e. when the page shows no image source addresses and no img tags, only div tags
# https://www.baidu.com/s?wd=%E5%91%A8%E6%9D%B0%E4%BC%A6
# Goal: get the page source of https://www.baidu.com/s?wd=周杰伦
import urllib.request
import urllib.parse

url = 'https://www.baidu.com/s?wd='

# Customizing the request object deals with the first anti-crawling measure
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36 SLBrowser/8.0.1.5162 SLBChan/105',
    'Cookie': '',  # log in to your Baidu account first, then copy your own cookie here
    # 'Accept': 'image/avif,image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8',
    # 'Accept-Encoding': 'gzip, deflate, br',
    # if the Accept-Encoding header above is sent, the response comes back compressed and the utf-8 decode below fails
    # 'Accept-Language': 'zh-CN,zh;q=0.9'
}

# Turn the three characters 周杰伦 into their percent-encoded form
# This relies on urllib.parse
name = urllib.parse.quote('周杰伦')
url = url + name

# Customize the request object
request = urllib.request.Request(url=url, headers=headers)

# Simulate a browser sending the request to the server
response = urllib.request.urlopen(request)

# Get the response content
content = response.read().decode('utf-8')

# Print the data
print(content)
GET: urllib.parse.urlencode() for multiple parameters
urlencode is used when there is more than one parameter
#https://www.baidu.com/s?wd=周杰伦&sex=男
import urllib.parse
data = {
    'wd': '周杰伦',
    'sex': '男',
    'location': '中国台湾省'
}
a = urllib.parse.urlencode(data)
print(a)
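For reference (the string below is my own reconstruction, not output captured in the original notes), urlencode percent-encodes each value and joins the pairs with &, so the printed result has the form wd=%E5%91%A8%E6%9D%B0%E4%BC%A6&sex=%E7%94%B7&location=... , matching the query string used in the next example.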
Get the page source of https://www.baidu.com/s?wd=%E5%91%A8%E6%9D%B0%E4%BC%A6&sex=%E7%94%B7
Adding a Cookie makes it work!
import urllib.request
import urllib.parse

base_url = 'https://www.baidu.com/s?'

data = {
    'wd': '周杰伦',
    'sex': '男',
    'location': '中国台湾省'
}

new_data = urllib.parse.urlencode(data)

# The requested resource path
url = base_url + new_data

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36',
    'Cookie': '',
}

# Customize the request object
request = urllib.request.Request(url=url, headers=headers)

# Simulate a browser sending the request to the server
response = urllib.request.urlopen(request)

# Get the page source
content = response.read().decode('utf-8')

# Print the data
print(content)
POST requests
Baidu Translate
The translate page fires many requests; find the interface (endpoint) that actually returns the data
The parameters of a POST request must be encoded: data = urllib.parse.urlencode(data)
After urlencoding, encode must also be called: data = urllib.parse.urlencode(data).encode('utf-8')
The parameters are passed in when customizing the request object:
request = urllib.request.Request(url=url, data=data, headers=headers)
import urllib.request
import urllib.parse

url = 'https://fanyi.baidu.com/sug'

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'
}

data = {
    'kw': 'spider'
}

# The parameters of a POST request must be encoded
# POST parameters are not appended to the url; they go into the request-object customization,
# and after urlencoding they must be byte-encoded with .encode('utf-8')
data = urllib.parse.urlencode(data).encode('utf-8')

request = urllib.request.Request(url=url, data=data, headers=headers)

# Simulate a browser sending the request to the server
response = urllib.request.urlopen(request)

# Get the response data
content = response.read().decode('utf-8')

# string --> json object
import json

obj = json.loads(content)
print(obj)
Result:
Differences between POST and GET: GET parameters are urlencoded and appended to the URL, while POST parameters are urlencoded, byte-encoded with .encode('utf-8'), and passed through the data argument of Request instead of being appended to the URL.
Baidu Translate, detailed translation endpoint
import urllib.request
import urllib.parse

url = 'https://fanyi.baidu.com/v2transapi?from=en&to=zh'

headers = {
    # 'Accept': '*/*',
    # 'Accept-Encoding': 'gzip, deflate, br',
    # the Accept-Encoding line above must stay commented out
    # 'Accept-Language': 'zh-CN,zh;q=0.9',
    # 'Connection': 'keep-alive',
    # 'Content-Length': '135',
    # 'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
    # in fact only the Cookie header is really needed
    'Cookie': 'BIDUPSID=DAA8F9F0BD801A2929D96D69CF7EBF50; PSTM=1597202227; BAIDUID=DAA8F9F0BD801A29B2813502000BF8E9:SL=0:NR=10:FG=1; __yjs_duid=1_c19765bd685fa6fa12c2853fc392f8db1618999058029; REALTIME_TRANS_SWITCH=1; FANYI_WORD_SWITCH=1; HISTORY_SWITCH=1; SOUND_SPD_SWITCH=1; SOUND_PREFER_SWITCH=1; BDUSS=R2bEZvTjFCNHQxdUV-cTZ-MzZrSGxhbUYwSkRkUWk2SkxxS3E2M2lqaFRLUlJoRVFBQUFBJCQAAAAAAAAAAAEAAAA3e~BTveK-9sHLZGF5AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFOc7GBTnOxgaW; BDUSS_BFESS=R2bEZvTjFCNHQxdUV-cTZ-MzZrSGxhbUYwSkRkUWk2SkxxS3E2M2lqaFRLUlJoRVFBQUFBJCQAAAAAAAAAAAEAAAA3e~BTveK-9sHLZGF5AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFOc7GBTnOxgaW; BDORZ=B490B5EBF6F3CD402E515D22BCDA1598; BAIDUID_BFESS=DAA8F9F0BD801A29B2813502000BF8E9:SL=0:NR=10:FG=1; BDRCVFR[feWj1Vr5u3D]=I67x6TjHwwYf0; PSINO=2; H_PS_PSSID=34435_31660_34405_34004_34073_34092_26350_34426_34323_22158_34390; delPer=1; BA_HECTOR=8185a12020018421b61gi6ka20q; BCLID=10943521300863382545; BDSFRCVID=boDOJexroG0YyvRHKn7hh7zlD_weG7bTDYLEOwXPsp3LGJLVJeC6EG0Pts1-dEu-EHtdogKK0mOTHv8F_2uxOjjg8UtVJeC6EG0Ptf8g0M5; H_BDCLCKID_SF=tR3aQ5rtKRTffjrnhPF3-44vXP6-hnjy3bRkX4Q4Wpv_Mnndjn6SQh4Wbttf5q3RymJ42-39LPO2hpRjyxv4y4Ldj4oxJpOJ-bCL0p5aHl51fbbvbURvD-ug3-7qqU5dtjTO2bc_5KnlfMQ_bf--QfbQ0hOhqP-jBRIE3-oJqC8hMIt43f; BCLID_BFESS=10943521300863382545; BDSFRCVID_BFESS=boDOJexroG0YyvRHKn7hh7zlD_weG7bTDYLEOwXPsp3LGJLVJeC6EG0Pts1-dEu-EHtdogKK0mOTHv8F_2uxOjjg8UtVJeC6EG0Ptf8g0M5; H_BDCLCKID_SF_BFESS=tR3aQ5rtKRTffjrnhPF3-44vXP6-hnjy3bRkX4Q4Wpv_Mnndjn6SQh4Wbttf5q3RymJ42-39LPO2hpRjyxv4y4Ldj4oxJpOJ-bCL0p5aHl51fbbvbURvD-ug3-7qqU5dtjTO2bc_5KnlfMQ_bf--QfbQ0hOhqP-jBRIE3-oJqC8hMIt43f; Hm_lvt_64ecd82404c51e03dc91cb9e8c025574=1629701482,1629702031,1629702343,1629704515; Hm_lpvt_64ecd82404c51e03dc91cb9e8c025574=1629704515; __yjs_st=2_MDBkZDdkNzg4YzYyZGU2NTM5NzBjZmQ0OTZiMWRmZGUxM2QwYzkwZTc2NTZmMmIxNDJkYzk4NzU1ZDUzN2U3Yjc4ZTJmYjE1YTUzMTljYWFkMWUwYmVmZGEzNmZjN2FlY2M3NDAzOThhZTY5NzI0MjVkMmQ0NWU3MWE1YTJmNGE5NDBhYjVlOWY3MTFiMWNjYTVhYWI0YThlMDVjODBkNWU2NjMwMzY2MjFhZDNkMzVhNGMzMGZkMWY2NjU5YzkxMDk3NTEzODJiZWUyMjEyYTk5YzY4ODUyYzNjZTJjMGM5MzhhMWE5YjU3NTM3NWZiOWQxNmU3MDVkODExYzFjN183XzliY2RhYjgz; ab_sr=1.0.1_ZTc2ZDFkMTU5ZTM0ZTM4MWVlNDU2MGEzYTM4MzZiY2I2MDIxNzY1Nzc1OWZjZGNiZWRhYjU5ZjYwZmNjMTE2ZjIzNmQxMTdiMzIzYTgzZjVjMTY0ZjM1YjMwZTdjMjhiNDRmN2QzMjMwNWRhZmUxYTJjZjZhNTViMGM2ODFlYjE5YTlmMWRjZDAwZGFmMDY4ZTFlNGJiZjU5YzE1MGIxN2FiYTU3NDgzZmI4MDdhMDM5NTQ0MjQxNDBiNzdhMDdl',
    # 'Host': 'fanyi.baidu.com',
    # 'Origin': 'https://fanyi.baidu.com',
    # 'Referer': 'https://fanyi.baidu.com/?aldtype=16047',
    # 'sec-ch-ua': '"Chromium";v="92", " Not A;Brand";v="99", "Google Chrome";v="92"',
    # 'sec-ch-ua-mobile': '?0',
    # 'Sec-Fetch-Dest': 'empty',
    # 'Sec-Fetch-Mode': 'cors',
    # 'Sec-Fetch-Site': 'same-origin',
    # 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36',
    # 'X-Requested-With': 'XMLHttpRequest',
}

data = {
    'from': 'en',
    'to': 'zh',
    'query': 'love',
    'transtype': 'realtime',
    'simple_means_flag': '3',
    'sign': '198772.518981',
    'token': '5483bfa652979b41f9c90d91f3de875d',
    'domain': 'common',
}

# The parameters of a POST request must be encoded, and encode must be called as well
data = urllib.parse.urlencode(data).encode('utf-8')

# Customize the request object
request = urllib.request.Request(url=url, data=data, headers=headers)

# Simulate a browser sending the request to the server
response = urllib.request.urlopen(request)

# Get the response data
content = response.read().decode('utf-8')

import json

obj = json.loads(content)
print(obj)
9. AJAX GET requests
Example: crawl the first 10 pages of Douban movie data
Get the first page of Douban movie data and save it
Find the interface (endpoint)
# GET request
import urllib.request

url = 'https://movie.douban.com/j/chart/top_list?type=5&interval_id=100%3A90&action=&start=0&limit=20'

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'
}

# (1) Customize the request object
request = urllib.request.Request(url=url, headers=headers)

# (2) Get the response data
response = urllib.request.urlopen(request)
content = response.read().decode('utf-8')

# (3) Save the data locally
# By default the open method uses gbk encoding
# To save Chinese characters, pass encoding='utf-8' to open
# The two commented-out lines below have the same effect as the with statement
# fp = open('douban.json', 'w', encoding='utf-8')
# fp.write(content)
with open('douban1.json', 'w', encoding='utf-8') as fp:
    fp.write(content)
Download the first 10 pages of Douban movie data
# https://movie.douban.com/j/chart/top_list?type=5&interval_id=100%3A90&action=&start=0&limit=20
The interface for the second page (and the pages after it):
# https://movie.douban.com/j/chart/top_list?type=5&interval_id=100%3A90&action=&start=20&limit=20
# https://movie.douban.com/j/chart/top_list?type=5&interval_id=100%3A90&action=&start=40&limit=20
# https://movie.douban.com/j/chart/top_list?type=5&interval_id=100%3A90&action=&start=60&limit=20
page    1    2    3    4
start   0    20   40   60
Pattern: start = (page - 1) * 20
# Download the first 10 pages of Douban movie data
# (1) Customize the request object
# (2) Get the response data
# (3) Save the data

import urllib.parse
import urllib.request

# Each page gets its own customized request object
def create_request(page):
    base_url = 'https://movie.douban.com/j/chart/top_list?type=5&interval_id=100%3A90&action=&'

    data = {
        'start': (page - 1) * 20,
        'limit': 20
    }

    data = urllib.parse.urlencode(data)

    url = base_url + data

    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'
    }

    request = urllib.request.Request(url=url, headers=headers)
    return request

# Get the response data
def get_content(request):
    response = urllib.request.urlopen(request)
    content = response.read().decode('utf-8')
    return content

# Save the data
def down_load(page, content):
    with open('douban_' + str(page) + '.json', 'w', encoding='utf-8') as fp:
        fp.write(content)

# Program entry point
if __name__ == '__main__':
    start_page = int(input('Enter the start page: '))
    end_page = int(input('Enter the end page: '))
    for page in range(start_page, end_page + 1):  # range is half-open on the right, hence end_page + 1
        # Each page gets its own customized request object
        request = create_request(page)
        # Get the response data
        content = get_content(request)
        # Save it
        down_load(page, content)
10. AJAX POST requests
Example: KFC official site, "which KFC locations are in Beijing", first 10 pages of data
import urllib.request
import urllib.parse

def create_request(page):
    base_url = 'http://www.kfc.com.cn/kfccda/ashx/GetStoreList.ashx?op=cname'

    data = {
        'cname': '北京',
        'pid': '',
        'pageIndex': page,
        'pageSize': '10'
    }

    data = urllib.parse.urlencode(data).encode('utf-8')

    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'
    }

    request = urllib.request.Request(url=base_url, headers=headers, data=data)
    return request

def get_content(request):
    response = urllib.request.urlopen(request)
    content = response.read().decode('utf-8')
    return content

def down_load(page, content):
    with open('kfc_' + str(page) + '.json', 'w', encoding='utf-8') as fp:
        fp.write(content)

if __name__ == '__main__':
    start_page = int(input('Enter the start page: '))
    end_page = int(input('Enter the end page: '))
    for page in range(start_page, end_page + 1):
        # Customize the request object
        request = create_request(page)
        # Get the page source
        content = get_content(request)
        # Save it
        down_load(page, content)
11. URLError / HTTPError
import urllib.request
import urllib.error

# HTTPError example: a bad article id on a real site
# url = 'https://blog.csdn.net/sulixu/article/details/1198189491'

# URLError example: a domain that does not exist
url = 'http://www.doudan1111.com'

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'
}

try:
    request = urllib.request.Request(url=url, headers=headers)
    response = urllib.request.urlopen(request)
    content = response.read().decode('utf-8')
    print(content)
except urllib.error.HTTPError:
    # HTTPError is a subclass of URLError, so it has to be caught first
    print('The system is being upgraded...')
except urllib.error.URLError:
    print('I already told you, the system is being upgraded...')
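A small extra sketch (my own addition, not in the original notes): the caught error objects carry details such as the status code, which can be more useful than a fixed message:

import urllib.request
import urllib.error

try:
    urllib.request.urlopen('https://blog.csdn.net/sulixu/article/details/1198189491')
except urllib.error.HTTPError as e:
    print(e.code)    # HTTP status code returned by the server, e.g. 404
    print(e.reason)  # textual reason for the failure
except urllib.error.URLError as e:
    print(e.reason)  # e.g. a name-resolution failure for an unreachable host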
12. Cookie login
Use case: during data collection you need to get past the login and reach a page behind it
The personal-info page is utf-8, yet we still get a decoding error
That is because we never reached the personal-info page; we were redirected to the login page, which is not utf-8, hence the error
When does the request fail? When the request headers carry too little information
Example: Weibo login
import urllib.request

url = 'https://weibo.cn/6451491586/info'

headers = {
    # ':authority': 'weibo.cn',
    # ':method': 'GET',
    # ':path': '/6451491586/info',
    # ':scheme': 'https',
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    # 'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'zh-CN,zh;q=0.9',
    'cache-control': 'max-age=0',
    # the cookie carries your login information; with a logged-in cookie we can reach any page
    'cookie': '_T_WM=24c44910ba98d188fced94ba0da5960e; SUBP=0033WrSXqPxfM725Ws9jqgMF55529P9D9WFxxfgNNUmXi4YiaYZKr_J_5NHD95QcSh-pSh.pSKncWs4DqcjiqgSXIgvVPcpD; SUB=_2A25MKKG_DeRhGeBK7lMV-S_JwzqIHXVv0s_3rDV6PUJbktCOLXL2kW1NR6e0UHkCGcyvxTYyKB2OV9aloJJ7mUNz; SSOLoginState=1630327279',
    # referer tells the server which page the request came from; it is usually used for image hotlink protection
    'referer': 'https://weibo.cn/',
    'sec-ch-ua': '"Chromium";v="92", " Not A;Brand";v="99", "Google Chrome";v="92"',
    'sec-ch-ua-mobile': '?0',
    'sec-fetch-dest': 'document',
    'sec-fetch-mode': 'navigate',
    'sec-fetch-site': 'same-origin',
    'sec-fetch-user': '?1',
    'upgrade-insecure-requests': '1',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36',
}

# Customize the request object
request = urllib.request.Request(url=url, headers=headers)

# Simulate a browser sending the request to the server
response = urllib.request.urlopen(request)

# Get the response data
content = response.read().decode('utf-8')

# Save the data locally
with open('weibo.html', 'w', encoding='utf-8') as fp:
    fp.write(content)
Failed.
Homework: crawl QQ Zone (QQ空间)
13. Handlers
Goal: use a handler to access Baidu and get the page source
import urllib.request

url = 'http://www.baidu.com'

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'
}

request = urllib.request.Request(url=url, headers=headers)

# handler -> build_opener -> open
# (handlers are what make features such as proxies possible; see the next section)
# (1) Get a handler object
handler = urllib.request.HTTPHandler()

# (2) Build an opener object
opener = urllib.request.build_opener(handler)

# (3) Call the open method
response = opener.open(request)

content = response.read().decode('utf-8')
print(content)
14. Proxy servers
The free ones are unusable; buy one
Looking up the IP address (the code searches "ip" on Baidu to see which address the server sees)
import urllib.request

url = 'http://www.baidu.com/s?wd=ip'

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'
}

# Customize the request object
request = urllib.request.Request(url=url, headers=headers)

# Simulating a browser accessing the server directly (without the proxy) would be:
# response = urllib.request.urlopen(request)

# Proxy ip
proxies = {
    'http': '118.24.219.151:16817'
}

# handler -> build_opener -> open
handler = urllib.request.ProxyHandler(proxies=proxies)
opener = urllib.request.build_opener(handler)
response = opener.open(request)

# Get the response data
content = response.read().decode('utf-8')

# Save it
with open('daili.html', 'w', encoding='utf-8') as fp:
    fp.write(content)
Proxy pool
import urllib.request
import random

proxies_pool = [
    {'http': '118.24.219.151:16817'},
    {'http': '118.24.219.151:16817'},
]

# Pick a proxy from the pool at random
proxies = random.choice(proxies_pool)

url = 'http://www.baidu.com/s?wd=ip'

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'
}

request = urllib.request.Request(url=url, headers=headers)

handler = urllib.request.ProxyHandler(proxies=proxies)
opener = urllib.request.build_opener(handler)
response = opener.open(request)

content = response.read().decode('utf-8')

with open('daili.html', 'w', encoding='utf-8') as fp:
    fp.write(content)
Finished watching up to P68.