- Change some command line syntax:
- Download by Tags (3): wildcard[y/n], start page, end page, tags
- Download by Title Caption (9): start page, end page, title/captions
- Download by Tags and MemberId (10): MemberId, start page, end page, tags
- Download by Group ID (12): GroupId, limit, process external[y/n]
- Fix trailing space before extensions in filename
- Add SOCKS proxy support; use the socks5:// or socks4:// format.
- Add temporary fix for datetime parsing (pixiv server bug).
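The datetime workaround presumably means tolerating timestamps the server sometimes returns in an unexpected shape. A minimal sketch of such tolerant parsing in Python (the function name and format strings here are illustrative assumptions, not the downloader's actual code):

```python
from datetime import datetime

# Illustrative workaround: try a list of known formats and fall back to
# None instead of crashing when the server returns a malformed date string.
KNOWN_FORMATS = ("%Y-%m-%d %H:%M:%S", "%m/%d/%y %H:%M")

def parse_work_date(raw):
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt)
        except ValueError:
            continue
    return None  # caller can substitute a placeholder date
```

The point is to keep a single bad timestamp from aborting a whole download batch.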
Download link for pixiv downloader 20130927, source code in GitHub.
Donation link on the sidebar :D.
EDIT: Download Link updated, also here is my mediafire folder for all the releases.
15 thoughts on “pixiv downloader 20130927”
Pixiv has probably changed something about their pages or they’re bugging out, but the downloader is fetching every page with the title of “pixiv Premium is the best way to enjoy what pixiv has to offer!”
The title shows up in the prompt with every fetched image and, if your filename format includes the title, in every saved picture as well. This happens to me on OS X, with a build that previously worked just fine, and on my Linux server.
Try getting the latest source code from GitHub. I tried on my PC and the title is parsed correctly.
I ran ‘git pull’ on both machines and they’re both up-to-date. Yet, they both parse the title exactly the same way.
Here’s the output, if you’re interested: http://i.imgur.com/VzRNCTQ.png
Weird, mine is parsed correctly. I've tried using the same member_id (1006311) as you.
Can you upload the actual page being downloaded from your Linux shell? What versions of Python and the BeautifulSoup library do you have? Mine are Python 2.7.2 and BeautifulSoup 3.2.
On OS X: Python 2.7.2 (provided by Apple), BeautifulSoup 3.2.0. It worked fine earlier today, but when I tried it a few hours later, it stopped parsing correctly.
On Linux: Python 2.7.5 (provided by ActivePython), BeautifulSoup 3.2.1.
Is there a way to make the downloader dump the html pages?
– The easiest way is to use Fiddler (http://fiddler2.com/get-fiddler) as an HTTP proxy.
– Modify the script to dump the html:
2. Insert a dumpHtml() call after the image page is parsed.
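A dumpHtml() helper along these lines would do the job (a hedged sketch; the real function in the source may differ in name and signature):

```python
def dumpHtml(filename, html):
    # Write the fetched page verbatim so a failing parse can be inspected
    # offline; encode text pages as UTF-8 before writing bytes.
    data = html.encode("utf-8") if isinstance(html, str) else html
    with open(filename, "wb") as f:
        f.write(data)
```

Dumping the raw page is usually the fastest way to tell a network problem apart from a parsing problem.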
The modified script only spits out 0 B error pages. Fiddler is probably not the easiest option, since I'd have to somehow get Fiddler to accept connections from the other side of the country.
Do I have to just wait until the next update? I can’t do any of this stuff.
Seems related to using English. I had the same issue.
Here is how I fixed it:
Log in to pixiv and set your account language to Japanese. Close pixiv downloader and clear out the cookie from the config.ini then try again.
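Clearing the cookie amounts to blanking one value in config.ini. A sketch using Python's configparser (the section and option names below are assumptions; check your config.ini for the actual layout the downloader uses):

```python
import configparser

def clear_cookie(path):
    cfg = configparser.ConfigParser()
    cfg.read(path)
    # "Authentication"/"cookie" are assumed names; adjust to match the file.
    if cfg.has_option("Authentication", "cookie"):
        cfg.set("Authentication", "cookie", "")
        with open(path, "w") as f:
            cfg.write(f)
```

Deleting the value by hand in a text editor works just as well; the snippet only shows where the stale session lives.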
A question I'd rather not test experimentally:
Does a hash (#) comment out tags in tags.txt?
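For reference, if the tags-file loader did support # comments, it would look roughly like this (illustrative only; check the actual source to confirm whether the downloader behaves this way):

```python
def load_tags(path):
    # Keep one tag per line, skipping blank lines and '#' comment lines.
    tags = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                tags.append(line)
    return tags
```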
The link is dead. 😛
Fixed the url 😛
Download link is broken
Fixed the url 😛