Downloading Short Videos Like Crazy

2018-04-12 13:25:28
0x01 Analyzing a Site's Videos
The short-video sites I've dealt with fall into two categories:
A: sites whose player encrypts the video
B: sites that play plain mp4 files (this type is actually very, very common)

Finding the download address for a category-A video:
Use the browser's built-in developer tools: open the Network tab and filter the requests by common video file formats, following the analysis shown in the screenshot below.

The download address can be sniffed out directly.
Note: quite often you will detect .ts files instead; the ckplayer player frequently serves this format. Don't panic: download all of the segments, then join them into a single video file with Format Factory.
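Since MPEG-TS is a self-synchronizing stream format, the downloaded .ts segments can usually be joined by plain binary concatenation, even without Format Factory. A minimal Python sketch under that assumption; the segments/ folder is a made-up name, and it assumes zero-padded file names so a lexicographic sort matches playback order:

import glob

# Collect the downloaded segments in playback order.
segments = sorted(glob.glob('segments/*.ts'))

# TS packets can simply be appended back to back.
with open('merged.ts', 'wb') as out:
    for seg in segments:
        with open(seg, 'rb') as f:
            out.write(f.read())

The merged .ts plays as-is in most players; remux it into an mp4 container afterwards if you need one.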


Category-B video files are the easy case:
just right-click the video and download it; no walkthrough needed.


0x02 Batch Downloading
Sometimes you get greedy and want to pull every video off a site.
Here are the ideas and methods I use.
Once you have the video download addresses, they fall into a few patterns:
www.xxx.com/video/aa001.mp4
www.xxx.com/video/aa002.mp4
www.xxx.com/video/aa00*.mp4

Download addresses like these can be fetched as-is; the tool is Xunlei (迅雷).
Two ways:



Xunlei offers two batch-download modes: batch download from a list of links, plus another driven by wildcard expressions!
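If you would rather script it than click through Xunlei, expanding a numeric pattern like the aa001...aa00* example above is trivial. A rough sketch; the host, naming scheme, and range are placeholders:

import urllib.request

base = 'http://www.xxx.com/video/aa%03d.mp4'   # hypothetical pattern from the example above

for n in range(1, 100):
    url = base % n
    try:
        urllib.request.urlretrieve(url, 'aa%03d.mp4' % n)
        print('downloaded', url)
    except OSError as e:
        # HTTP errors subclass OSError; a 404 just means that number is unused.
        print('skipped', url, e)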

For addresses like those, where the URL isn't obfuscated, just rip them down in bulk.
But some look like this instead:
www.xxx.com/video/9452ab54287bbccc989.mp4
a string with no pattern whatsoever. It's worth spending a little time analyzing how it's generated; if you can't work it out, give up on guessing.
For sites like that, here's an approach:
a: first write a crawler script that collects the URLs of all the playback pages
b: then write a script that extracts the actual video address from each page (see the sketch below)

This line of attack solves basically every case.
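A minimal sketch of steps a and b combined: crawl one listing page for playback-page links, then regex the real mp4 address out of each page. Every URL and pattern below is a placeholder; a real site needs its own patterns:

import re
import urllib.request

LIST_URL = 'http://www.xxx.com/videos/'    # placeholder listing page

def fetch(url):
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode('utf-8', errors='ignore')

# Step a: collect the playback-page URLs from the listing page.
html = fetch(LIST_URL)
pages = set(re.findall(r'href="(/play/[^"]+)"', html))    # placeholder pattern

# Step b: pull the actual video address out of each playback page.
for path in pages:
    page = fetch('http://www.xxx.com' + path)
    for mp4 in re.findall(r'https?://[^"\']+?\.mp4', page):
        print(mp4)    # feed these into Xunlei, wget, or a script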

0x03 Results




Note: this is a very simple way of ripping videos; I'm only sharing my own approach here, nothing technically deep.
TCV : 0

About the author

busishen · 5 articles · 47 replies

Interested in low-level reverse engineering

Comments: 62

  • TOP1
    2018-4-12 15:19

    #!/usr/bin/env python
    # -*- coding: utf-8 -*-
    import re, os, socket, base64
    import urllib.request, urllib.error

    # The site root, base64-encoded (presumably to keep it out of plain sight).
    url = base64.b64decode('aHR0cHM6Ly8yMDE3MTJtcDQuODlzb3NvLmNvbS8=').decode('utf-8')
    jpgs = ['1_1.jpg', '1_2.jpg', '1_3.jpg', '1_4.jpg', '1_5.jpg',
            '1_6.jpg', '1_7.jpg', '1_8.jpg', '1_9.jpg', '1_10.jpg']
    urllist = [
        '20170102', '20170103', '20170104', '20170106', '20170107', '20170108', '20170109', '20170110',
        '20170111', '20170112', '20170113', '20170114', '20170115', '20170116', '20170220', '20170222',
        '20170223', '20170224', '20170225', '20170226', '20170227', '20170228', '20170301', '20170302',
        '20170303', '20170304', '20170305', '20170306', '20170307', '20170308', '20170309', '20170310',
        '20170311', '20170312', '20170316', '20170318', '20170322', '20170323', '20170324', '20170325',
        '20170326', '20170327', '20170329', '20170330', '20170331', '20170401', '20170402', '20170403',
        '20170404', '20170405', '20170406', '20170407', '20170408', '20170409', '20170410', '20170412',
        '20170413', '20170415', '20170416', '20170417', '20170418', '20170419', '20170420', '20170421',
        '20170422', '20170423', '20170424', '20170425', '20170426', '20170427', '20170428', '20170429',
        '20170430', '20170501', '20170502', '20170503', '20170504', '20170505', '20170506', '20170507',
        '20170508', '20170509', '20170510', '20170511', '20170512', '20170513', '20170514', '20170515',
        '20170516', '20170517', '20170518', '20170519', '20170520', '20170521', '20170522', '20170523',
        '20170524', '20170525', '20170526', '20170527', '20170528', '20170529', '20170530', '20170531',
        '20170601', '20170602', '20170603', '20170604', '20170605', '20170606', '20170607', '20170608',
        '20170609', '20170610', '20170611', '20170612', '20170613', '20170614', '20170615', '20170616',
        '20170617', '20170618', '20170619', '20170620', '20170621', '20170622', '20170623', '20170624',
        '20170625', '20170626', '20170627', '20170628', '20170629', '20170630', '20170701', '20170702',
        '20170704', '20170705', '20170706', '20170707', '20170708', '20170709', '20170710', '20170711',
        '20170712', '20170713', '20170714', '20170715', '20170716', '20170717', '20170718', '20170719',
        '20170720', '20170721', '20170722', '20170723', '20170724', '20170725', '20170726', '20170727',
        '20170728', '20170729', '20170730', '20170731', '20170801', '20170802', '20170803', '20170804',
        '20170805', '20170806', '20170807', '20170808', '20170809', '20170810', '20170811', '20170812',
        '20170813', '20170814', '20170815', '20170816', '20170817', '20170818', '20170819', '20170820',
        '20170821', '20170822', '20170823', '20170824', '20170825', '20170826', '20170827', '20170828',
        '20170829', '20170830', '20170831', '20170901', '20170902', '20170903', '20170904', '20170905',
        '20170906', '20170907', '20170908', '20170909', '20170910', '20170911', '20170912', '20170913',
        '20170914', '20170915', '20170916', '20170917', '20170918', '20170919', '20170920', '20170921',
        '20170922', '20170923', '20170924', '20170925', '20170926', '20170927', '20170928', '20170929',
        '20170930', '20171001', '20171002', '20171003', '20171004', '20171005', '20171006', '20171007',
        '20171008', '20171009', '20171010', '20171011', '20171012', '20171013', '20171014', '20171015',
        '20171016', '20171017', '20171018', '20171019', '20171020', '20171021', '20171022', '20171023',
        '20171024', '20171025', '20171026', '20171027', '20171028', '20171029', '20171030', '20171031',
        '20171101', '20171102', '20171103', '20171104', '20171105', '20171106', '20171107', '20171108',
        '20171109', '20171110', '20171111', '20171112', '20171113', '20171114', '20171115', '20171116',
        '20171117', '20171118', '20171119', '20171120', '20171121', '20171122', '20171123', '20171124',
        '20171125', '20171126', '20171127', '20171128', '20171129', '20171130', '20171201', '20171202',
        '20171203', '20171204', '20171205', '20171206', '20171207', '20171208', '20171209', '20171210',
        '20171211', '20171212', '20171213', '20171214', '20171215', '20171216', '20171225', '20171226',
        '20171227', '20171228', '20171229', '20171230', '20171231', '20180104', '20180105', '20180106',
        '20180107', '20180108', '20180109', '20180110', '20180111', '20180112', '20180113', '20180114',
        '20180115', '20180116', '20180117', '20180118', '20180119', '20180120', '20180121', '20180122',
        '20180123', '20180124', '20180125', '20180126', '20180127', '20180128', '20180129', '20180130',
        '20180131', '20180201', '20180202', '20180203', '20180204', '20180205', '20180206', '20180207',
        '20180208', '20180209', '20180210', '20180221', '20180222', '20180223', '20180224', '20180302',
        '20180303', '20180304', '20180305', '20180306', '20180307', '20180308', '20180309', '20180310',
        '20180311', '20180312', '20180313', '20180314', '20180315', '20180316', '20180317', '20180318',
        '20180319', '20180320', '20180321', '20180322', '20180323', '20180324', '20180325', '20180326',
        '20180327', '20180328', '20180329', '20180330', '20180331', '20180406', '20180407', '20180408',
        '20180409', '20180410']

    def jpg(url, j):
        '''Download the preview images for one date, and dump the video links.'''
        if not os.path.exists('jpg'):
            os.mkdir('jpg')
        os.mkdir('jpg\\%s' % j)
        for y in range(1, 24):
            count = 1
            os.mkdir('jpg\\%s\\%s' % (j, y))
            for z in jpgs:
                try:
                    k = url + "/" + str(y) + "/" + z
                    xmlurl = url + "/" + str(y) + "/" + "1/xml/index.xml"
                    if count == 10:
                        # On the last image of a group, grab the XML index and
                        # write each mp4 link into a small HTML player page.
                        data = urllib.request.urlopen(xmlurl).read().decode()
                        redata = re.compile(r'http.*?\.mp4')
                        datas = redata.findall(data)
                        counts = 1
                        for i in datas:
                            with open("jpg\\%s\\%s\\mp4_%s.html" % (j, y, counts), 'a') as f:
                                f.write('<video src="%s" controls="controls"' % i)
                                f.write(r" width='100%' height='100%'></video>")
                            counts += 1
                        print("Video links saved into HTML files under the directory")
                    socket.setdefaulttimeout(5)  # skip any download that stalls for 5 seconds
                    urllib.request.urlretrieve(k, "jpg\\%s\\%s\\%s.jpg" % (j, y, count))
                    print("Downloading group %s, image %s" % (y, count))
                    count += 1
                except urllib.error.HTTPError as e:
                    print(e, "Group %s image %s failed, download continues!" % (y, count))
                    count += 1
                except urllib.error.URLError as e:
                    print(e, "Group %s image %s failed, download continues!" % (y, count))
                    count += 1
                except socket.timeout as e:
                    print(e, "Group %s image %s failed, download continues!" % (y, count))
                    count += 1

    for i in urllist:
        j = url + i
        jpg(j, i)

  • #62
    2021-8-31 22:16
    阿西木

    Quoting #57: "For segmented videos, is there any way to merge them into one?"

    Check the extension. If it's mp4, then in cmd: copy /b *.mp4 mix.mp4

  • #61
    2018-6-15 16:18

    Merging all the small .ts files into one: that's the tip I got out of this.

  • #60
    2018-6-15 08:29

    Or Baidu-search a video with the same title and find an mp4 copy to download.

  • #59
    2018-4-19 12:54

    Fittingly, while testing Youku videos today I found that a lot of what gets grabbed is .ts files; turns out you can just stitch them together with Format Factory... Y2B is even more convenient: once you grab the address you can convert it online and then download.

  • #58
    2018-4-18 10:54

    Tried it; thanks anyway. But what are #2 and #10 supposed to mean? You posted them without a word of explanation. There's also a hardcoded encoded string in there, and it downloads images of all things; what's the use?!

  • #57
    2018-4-18 09:06

    For segmented videos, is there any way to merge them into one?

  • #56
    2018-4-17 22:57

    Good for grabbing a few short videos to browse; this feels a bit like the brute-force method used against 91.

  • #55
    2018-4-17 20:41

    Heh, nice, learned something. Puts those paid services online to shame. Heh.

  • #54
    2018-4-17 14:25
    算命縖子

    [quotes the script posted in the TOP1 comment above]

    The base encoding is a fun touch, haha. Next time you could try the time module; at the very least the code wouldn't look so long~~~
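    (A quick sketch of that suggestion: the ~350 hardcoded date strings could be generated with the standard library instead. Note this emits every day in the range, while the original list has gaps; nonexistent dates would simply fail to download:)

    from datetime import date, timedelta

    start, end = date(2017, 1, 2), date(2018, 4, 10)
    urllist = [(start + timedelta(days=n)).strftime('%Y%m%d')
               for n in range((end - start).days + 1)]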

  • #53
    2018-4-16 17:57

    Your approach is delightfully sneaky... picked up another new trick...

  • #52
    2018-4-16 16:28

    360 also ships a built-in tool for this; the idea should be about the same.

  • #51
    2018-4-16 16:27

    That's a URL directory-listing hole... just wget -m it and you're done.

  • #50
    2018-4-16 15:21

    Nice one. I happen to be feeling lonely, empty, and cold lately and want to download a few videos, haha.

  • #49
    2018-4-16 13:42

    t00ls could open a dedicated crawler topic.

  • #48
    2018-4-15 19:59

    You're all seasoned drivers; your hard drives are hardly short of this stuff.

  • #47
    2018-4-15 17:08

    Firefox has had dedicated add-ons for this for ages. https://jingyan.baidu.com/article/20095761d6e467cb0721b400.html

  • #46
    2018-4-15 00:39
    算命縖子

    [quotes the script posted in the TOP1 comment above]

    Is this for real? Trying to keep people from reading the URL?

  • #45
    2018-4-14 23:31

    So .ts files can be joined into a single video with Format Factory. First time hearing that; learned something.

  • #44
    2018-4-14 23:11

    Another wave of short-video downloading.

  • #43
    2018-4-13 21:57

    ffmpeg can merge .m3u8 files.
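    (For reference, ffmpeg reads the playlist and joins the segments without re-encoding. A sketch invoking it from Python; the file names are placeholders and ffmpeg is assumed to be on PATH:)

    import subprocess

    # -c copy remuxes the HLS segments into one mp4 without re-encoding.
    subprocess.run(['ffmpeg', '-i', 'playlist.m3u8', '-c', 'copy', 'output.mp4'], check=True)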