danbooru downloader 20150923

Change Log for DanbooruDownloader20150923

  • Implement Feature #51: Allow option to ignore/blacklist only for General tags.
  • Fix Issue #57: Replace tags with ‘multiple_tag_type’ when they exceed the limit.
  • Implement Feature #57: Take only up to the specified tag-limit option (a rough sketch of both new options follows below).
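
For anyone wondering how the two new tag options are meant to behave, here is a minimal Python sketch of the idea; the function and option names are made up for illustration and are not the actual code:

    from collections import namedtuple

    # Hypothetical stand-in for the downloader's tag objects.
    Tag = namedtuple("Tag", ["name", "type"])

    def apply_tag_options(tags, blacklist, blacklist_general_only, tag_limit):
        """Drop blacklisted tags (optionally General tags only), then cap the count."""
        kept = []
        for tag in tags:
            # Feature #51: the blacklist can be restricted to General tags,
            # so Artist/Copyright/Character tags are never dropped by it.
            if tag.name in blacklist and (not blacklist_general_only or tag.type == "General"):
                continue
            kept.append(tag)
        # Feature #57: take only up to the configured number of tags.
        return kept if tag_limit is None else kept[:tag_limit]

    tags = [Tag("vocaloid", "Copyright"), Tag("solo", "General"), Tag("1girl", "General")]
    print(apply_tag_options(tags, blacklist={"solo"}, blacklist_general_only=True, tag_limit=2))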

Download link for danbooru downloader 20150923; source code is on GitHub.

Donation Link on the side bar ==> 😀

27 thoughts on “danbooru downloader 20150923”

  1. It works, but I had to edit my “DanbooruProviderList.xml” to set HTML as the preferred method. I did a search, it failed, I edited the cookie three times (‘userid, pass, tag blacklist’), and now the batch job works fine. Your program still has memory issues (like leaks) but works fine now. The only thing missing would be a way to save the cookie for later use ^^’ but no rush.

    1. Ah crap, I forgot that I already have an HTML parser to download from Gelbooru 😛 no need to log in, but maybe I’ll add the feature sometime.

      Edit your provider list like this: http://imgur.com/pZe4Pp8

      Sometimes I forget my own app 😀

      Crap, forget it, I think they also limit the download, because I got no file_url after a few images…

      1. Got another manual way (a rough code sketch of the same idea is at the end of this comment):
        1. Log in to your Gelbooru account and copy the user_id and pass_hash from the browser cookie. http://imgur.com/GVeJkI8
        2. Do one search using the app; it will fail. Then go to the settings page and click the cookie button.
        3. Modify the cookie name and value; you may need to do another search to get a new cookie line. http://imgur.com/wRmuCPc
        4. Close the window and start searching again. By then, the API access should be open.

        I’m a bit busy with work, so expect slow updates.
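
        For reference, the same two cookies should also work against the Gelbooru XML API outside the app; a minimal Python sketch (the cookie values and the tag are placeholders — copy your own from the browser):

            import requests
            import xml.etree.ElementTree as ET

            # Placeholder values - use the user_id and pass_hash from your own browser cookie.
            cookies = {"user_id": "123456", "pass_hash": "abcdef0123456789"}

            resp = requests.get(
                "https://gelbooru.com/index.php",
                params={"page": "dapi", "s": "post", "q": "index",
                        "tags": "vocaloid", "limit": 10},
                cookies=cookies,
                timeout=30,
            )
            resp.raise_for_status()

            # When the API access is open, each <post> element carries a direct file_url.
            for post in ET.fromstring(resp.content).iter("post"):
                print(post.get("file_url"))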

  2. Hey, is there a reason why Danbooru and all the Sankaku sites force a loop when updating the tags .xml? It’s a pain, as I cannot set a limit.

    1. Sankaku needs to loop because they don’t open the direct tags.xml download, so I need to parse the tag pages one by one (and there are thousands of pages).

    1. That’s because your ISP blocks the access. Some ISPs actually do the blocking with a DNS block, so you can try to replace the DNS manually. If they do it with an ISP-level firewall, try to use a proxy or get a VPN?
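
      A quick way to check whether it is really a DNS-level block (the hostname is just an example; use the site you cannot reach):

          import socket

          host = "danbooru.donmai.us"  # example hostname; replace with the blocked site
          try:
              print(host, "resolves to", socket.gethostbyname(host))
          except socket.gaierror as err:
              # No answer from the ISP resolver usually points at a DNS block; try a
              # public DNS server (e.g. 8.8.8.8) in your network settings and test again.
              print("DNS lookup failed:", err)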

  3. Hi, apologies for the simple/noob question, but when I use full batch mode it says it has downloaded X amount of files, but the files are nowhere to be found on my PC.
    Do I need to do something extra to get them? I just followed the readme instructions.

    1. By default, they should be saved to the same folder as the application. You can check the log file (search for the line: [DoBatchJob] Saved To:).
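
      If you want to pull those entries out quickly, something like this works; the log file name here is an assumption — use whichever .log file the app writes next to the exe:

          # Print every "Saved To" entry from the downloader's log file.
          # "DanbooruDownloader.log" is an assumed name - point it at the real log file.
          with open("DanbooruDownloader.log", encoding="utf-8", errors="replace") as log:
              for line in log:
                  if "[DoBatchJob] Saved To:" in line:
                      print(line.rstrip())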

    1. I can’t use it with Sankaku… The tags will NEVER finish; there are 13000 tag pages and there’s a limit on how many connections you can make every 100 pages… Impossibooru

      1. Yep, it is a problem because Sankaku blocks the direct XML download. You need to ask the owner to open it again, or the parser needs to be enhanced to actually parse the tag data from the image pages (please request it on GitHub).

        1. When downloading from Sankaku, the program sometimes works as intended and sometimes doesn’t. When it doesn’t work, it just says the batch is complete without downloading anything… it’s really frustrating, and it seems to come and go at random. I’ve tried closing and restarting the program, and even running in administrator mode, all to no avail.

          Also, on a separate note, when you say to request on GitHub, what do you mean? Suggestions about this program, or the person running Sankaku?

          1. Yeah… I even logged in… it didn’t make much difference. Weird thing, though… when I entered a one-string search it actually started to work as it should. I tested it again by entering a two-string search term, and no dice. To make sure it wasn’t a typo, and that there was actually subject material related to what I was searching for, I pasted what I typed into Sankaku’s search bar, and yes, it pulled it up without issue. I’ve got no idea what the problem is… all I know is I’m trying to refine my search as best as I can, and it’s not letting me do it.

            Also, yes, I did make sure that the limit was set to something like 9999, and the page was set to 1. Setting the page to 2 didn’t help any, BTW. 😛

          2. Just found this out: two tags will now work; however, what seems to be breaking it is the loli/shota tag. It will download things just fine if you enter loli/shota as one tag, but if you add any modifiers (example: shota milf), that is when you will not be able to download anything from Sankaku, which is a problem if you’re trying to narrow things down to a specific theme. Also, yes, I did try entering my username and password, and that didn’t work. 🙁

          3. I found the issue: apparently I replaced the space with an underscore when generating the query string. Please use a plus sign (+) between each tag, and it should work.
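
            In other words, the separator between tags in the query string must stay a plus sign; underscores only belong inside a single tag name. A tiny Python sketch of the difference (the helper is just for illustration):

                from urllib.parse import quote

                def build_tag_query(tags):
                    # Join multiple tags with '+'; replacing the separating spaces with
                    # underscores makes the site treat everything as one (nonexistent) tag.
                    return "+".join(quote(tag) for tag in tags)

                print(build_tag_query(["shota", "milf"]))   # shota+milf -> two tags
                print("shota milf".replace(" ", "_"))       # shota_milf -> one wrong tag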

  4. Good day, Nandaka.
    Thank you for this app ^^
    But please correct the changelog.
    It is from the previous version (0429).

  5. How can I make the program download pictures from this site?

    http://the-collection.booru.org/

    I got it to make a list with a Gelbooru-type search using this HTML string “/index.php?page=post&s=list&%_query%”, but it can’t get the URLs to the images.
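
    booru.org sites usually also expose the standard Gelbooru 0.1 XML API, which already returns the direct file URLs; a rough Python sketch, assuming that API is enabled on the-collection.booru.org (the tag is just an example):

        import requests
        import xml.etree.ElementTree as ET

        # Assumes the standard Gelbooru 0.1 API endpoint is enabled on the site.
        resp = requests.get(
            "http://the-collection.booru.org/index.php",
            params={"page": "dapi", "s": "post", "q": "index",
                    "tags": "solo", "limit": 20},
            timeout=30,
        )
        resp.raise_for_status()

        for post in ET.fromstring(resp.content).iter("post"):
            print(post.get("file_url"))  # direct image URL for each post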

  6. In case anyone else had the same problem as I did at first and was unable to extract the archive: update your extraction software. The archive uses a (relatively) new compression method that is not supported by some older versions of 7-Zip and WinRAR. Updating to the most recent version should fix the issue.
