70 thoughts on “Danbooru Downloader 201009117”

  1. You know yours works better than any of the other imageboard DLers out there, right? And your Pixiv one is awesome too.

      1. Actually! I have tried that one. Much better GUI, but not as much control.
        It can import sites, but you can do that in yours as well with some tinkering.
        Grabber won’t let me specify that I want the file named by the tag I searched [which I use as the folder name]… That’s one thing that really annoys me.

    1. If you are using Full Batch Mode, it currently cannot be done (there is no access to the query). If you are using the Main tab, disable the ‘Generated’ check box and type pool:1214 in the query box (see Search Help). Correction: just type pool:1214 in the Tags box.

    1. Currently suspended, because I rarely use it anymore. It is fine if someone wants to modify it, as long as the source code stays available.

  2. Hey there,
    I think there’s a little bug in there.
    If you search on Gelbooru and there are more than 1000 pics, you can’t get to page 2.
    That’s because you add page=2 to the URL, but in fact it’s &pid=2. I’m not able to compile your project (I don’t know anything about C#). Is there any way to fix it?

    1. Gelbooru is using a different engine/API; for now, you must disable the Auto Load Next Page feature for that site. The only way to fix this is by adding engine detection to the auto page load, as sketched below.
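
      For reference, a minimal sketch of what that engine detection could look like (the type and method names here are hypothetical; this is not actual Danbooru Downloader code):

      // Sketch only: pick the paging parameter based on the board engine.
      // Danbooru-based boards page with "page=N"; Gelbooru-based ones use "pid=N".
      public enum BoardEngine { Danbooru, Gelbooru }

      public static class Paging
      {
          public static string NextPageUrl(string baseQueryUrl, BoardEngine engine, int nextPage)
          {
              string param = engine == BoardEngine.Gelbooru ? "pid" : "page";
              return baseQueryUrl + "&" + param + "=" + nextPage;
          }
      }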

  3. I don’t get it; how do I download from Gelbooru?
    I enter “bleach&pid=200” in the Tags textbox and disable Auto Load Next Page, and then what?
    I start the download, and it only downloads the last 100 pics.

  4. Hi
    I really appreciate that you made this program.
    Until yesterday, I didn’t know this kind of program existed,
    so I had been saving images one by one.
    Almost 300,000!!
    Oh god..
    Anyway, now I have found the light!
    I really wish I could tell you how thankful I am.
    And I hope you will consider my suggestions.

    First, I need all the tags.
    The 200 limit is too short.
    If you can, please remove this limit.
    Also, I get an error message when I search past page 1000.
    Let me know how to solve this situation.
    Thanks for reading my not-so-good English.
    ^^

    1. Usually the limit is from the server (a hard limiter: the server will override any request for more than its defined value). As for the error, can you give me the error message/log?

    2. Oh, how about this?
      When I start a search, it could show a new ‘total pages’ figure next to the ‘total count’.
      And in addition, you could add a new ‘page’ option.
      If we know the total pages, we can pick exactly the pages we need.

      Ex) Page] [1000]~[1200] [Get].

      Or also, a reverse search option.

      In this case, could we download past page 1000?

  5. Dear Nandaka,
    How can I download more than the 100 latest pics from Gelbooru?
    tag+id:..100 or tag+id:100..200 don’t work on Gelbooru.

    1. For Gelbooru, you need to append “&pid=some_number_here” at the end of the Tags textbox and disable Auto Load Next Page.

      1. Doesn’t work… can you provide a screenshot? Also, will this work in Full Batch mode?

        Thanks 🙂

    1. For http://rule34.paheal.net/, append this in DanbooruProviderList.xml just before the last line (before </DanbooruProviderList>):

      <DanbooruProvider>
        <Name>rule34.booru.org</Name>
        <Url>http://rule34.booru.org</Url>
        <QueryStringJson></QueryStringJson>
        <QueryStringXml>/index.php?page=dapi&amp;s=post&amp;q=index&amp;%_query%</QueryStringXml>
        <Preferred>xml</Preferred>
        <DefaultLimit>20</DefaultLimit>
      </DanbooruProvider>

  6. So, will you update this tool again? It’s really good, but it’s a shame that you can’t download more than 1000 pictures.

  7. Well, can’t you program it to send several requests, or something like that, so that you can download everything in one go?

    1. Actually you can set more than 1000, but the server will only return 1000 results; the limit is from danbooru, not from me.

  8. I’ve been trying to get the one for rule34.paheal.com to work, but my knowledge of this stuff is terrible.

    I ended up with this, but it doesn’t work. Plus I don’t know how to remove the “Tags=” part of the query…

    QueryStringXml: /post/list/%_query%

    It makes the program crash when I try it, haha.

    1. Yeah, that site is using a different kind of danbooru engine; you need to look at its xml/json output and make sure the format is identical to the danbooru one. For comparison, see the sample below.
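
      A danbooru-style post list in XML looks roughly like this (illustrative values, with most attributes trimmed; not the full schema):

      <posts count="1" offset="0">
        <post id="123456" md5="d41d8cd98f00b204e9800998ecf8427e"
              tags="flandre_scarlet touhou"
              file_url="http://example.com/image/d41d8cd98f00b204e9800998ecf8427e.jpg"
              rating="s" />
      </posts>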

      1. Gotcha, makes sense.
        No worries though. Thanks for the great program! I’ll use it on the other sites.

  9. Thedoujin doesn’t work properly: since they use a child post system, the program only downloads the first page.

  10. Hey there,
    I love your app,
    but how do I get it to work for 3dbooru?
    I keep getting “error 403” on every pic except the first one.

    Everything else is awesome.

  11. Hello! Thanks for that great tool, it works perfectly 🙂

    I have a question: is there any way to configure the downloader to download the .jpg version of an image instead of the .png? For example, on http://oreno.imouto.org/ I can choose a .jpg or .png download, and the downloader always seems to go for the .png.

    Thanks in advance.

      1. Ok, I wrote a post-processor for Danbooru Downloader’s saved download lists which rewrites links from the ultra-large .png versions to the .jpg versions (for oreno.imouto.org) and also deletes duplicate images from the list. The list can then be loaded back into the Downloader. A sketch of the idea is below.

        I can upload it somewhere if that program is of any use.

        BTW: moe.imouto.org moved to oreno.imouto.org
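
        A minimal sketch of such a post-processor (assuming the saved list is a plain text file with one file URL per line; the real list format may differ):

        // Sketch only: prefer .jpg links over ultra-large .png, then drop duplicates.
        using System;
        using System.IO;
        using System.Linq;

        class ListFixer
        {
            static void Main(string[] args)
            {
                var lines = File.ReadAllLines(args[0])
                    .Select(url => url.Contains("oreno.imouto.org")
                        ? url.Replace(".png", ".jpg")   // assumes the .jpg lives at the same path
                        : url)
                    .Distinct()                         // remove duplicate entries
                    .ToArray();
                File.WriteAllLines(args[0] + ".fixed", lines);
            }
        }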

    1. I would like this, please. My only issue with this downloader is that it doesn’t filter out duplicates based on md5sum.

      I’ve been prefixing my filenames with the md5sum so I can go in and delete duplicates across multiple boorus when I’m batch downloading from them.

      I would like it even better if batch mode would simply ignore duplicates and not download them.
      Even neater would be if it kept a database of previously downloaded md5sums/download locations and, instead of downloading a file again, made a shortcut/.lnk to the original download location.
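
      For illustration, a minimal sketch of that md5 database idea (hypothetical; this is not a feature of the actual downloader):

      // Sketch only: remember md5 -> saved path, so known files can be skipped.
      using System.Collections.Generic;
      using System.IO;

      class Md5Registry
      {
          private readonly Dictionary<string, string> seen = new Dictionary<string, string>();
          private readonly string dbFile;

          public Md5Registry(string dbFile)
          {
              this.dbFile = dbFile;
              if (File.Exists(dbFile))
                  foreach (var line in File.ReadAllLines(dbFile))
                  {
                      var parts = line.Split('\t');
                      if (parts.Length == 2) seen[parts[0]] = parts[1];
                  }
          }

          // Returns the existing path if this md5 was seen before, otherwise null.
          public string Lookup(string md5)
          {
              string path;
              return seen.TryGetValue(md5, out path) ? path : null;
          }

          public void Add(string md5, string path)
          {
              seen[md5] = path;
              File.AppendAllText(dbFile, md5 + "\t" + path + "\n");
          }
      }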

      1. Currently I’m focused on Pixiv Downloader, so I can’t tell when, or if, I can add those features. Maybe for now you can check out Grabber; it can merge multiple *booru searches and is updated quite frequently.

      2. Unfortunately the “merge” is useless. It just merges the display results but does not sort out duplicates.

      3. What really sucks is… a friend of mine was writing her own Perl script that would not only leech from multiple boorus and sort out duplicates, it would also save the metadata to a searchable DB. But when she was almost done, her hard drive died. And now she doesn’t feel like rewriting it.

      4. Sorry for leaving so many comments… but honestly the “danbooru downloader” add-on for Firefox seems to be the best thing. It just autosaves the images you view, with the file-naming scheme you want. Too bad it isn’t automatic… at least it auto-renames older downloads, and it doesn’t seem to download duplicates from other boorus.

      5. Or rather I should say… it is automatic… but you still have to visit the image page for it to save it. No batch downloading capability.

  12. As a follow-up to my last comment, it would appear that the problem is that the program does not work with Gelbooru at all; it just returns various errors.

    1. Gelbooru (and other *boorus not based on the original danbooru engine; they rewrote it) uses a different/incomplete API for getting the post list. Some of the fields cannot be used, so you must supply those parameters manually. For reference, you can see the Gelbooru API parameters here and compare them with danbooru’s here; the difference is illustrated below.
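
      For example, the same search on the two engines (example values; note the different paths and paging parameters):

      Danbooru: http://danbooru.donmai.us/post/index.xml?tags=touhou&limit=100&page=2
      Gelbooru: http://gelbooru.com/index.php?page=dapi&s=post&q=index&tags=touhou&limit=100&pid=2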

  13. Hello. I just kinda stumbled upon this blog, and after looking through a few pages, decided to try out the Danbooru Downloader.

    Seems promising, except I have no idea how to use it o.o

    Is there a readme anywhere?

  14. Can you add a feature to set how many files to download at once? That way we can speed up the download.

  15. OK, but it was just perfect for those who, like me, use a local danbooru (download-only, without a powerful search engine, is a bit pointless). Anyway, no matter, I tried :), bye.

  16. Couldn’t you add a function to download and automatically upload to another danbooru-based site, with tags, rating, and source? A sort of import/export function for danbooru engine sites. It would be really fantastic.

    1. My application is only for downloading (it does not support uploading, hence ‘downloader’), but if you want to add more sites, you can just edit DanbooruProvider.xml (see the provider entry example above), as long as the sites are danbooru/Shimmie/gelbooru based.

  17. Yeah, the max limit is 1,000, which isn’t much of a batch if you’re trying to download a tag with ~9,000 images from danbooru :<

    1. That limit comes from the danbooru website; each json/xml request is only allowed to return 1000 results (or fewer), so you must make multiple requests (for example: pages 1, 2, 3, etc., each with a 1000 limit). I haven’t automated the page forwarding for batch mode yet; a sketch of the approach is below.
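
      For anyone comfortable scripting it, a minimal sketch of that multi-request approach (the endpoint is danbooru’s public XML API; the tag name is a placeholder and error handling is omitted):

      // Sketch only: fetch pages 1..9 of a tag search, up to 1000 posts per request.
      using System;
      using System.Net;

      class PagedFetch
      {
          static void Main()
          {
              using (var client = new WebClient())
              {
                  for (int page = 1; page <= 9; page++)   // ~9,000 images => 9 pages
                  {
                      string url = "http://danbooru.donmai.us/post/index.xml"
                                 + "?tags=some_tag&limit=1000&page=" + page;
                      string xml = client.DownloadString(url);
                      Console.WriteLine("page {0}: {1} bytes", page, xml.Length);
                      // parse the XML here and queue each file_url for download
                  }
              }
          }
      }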

  18. How do I use this Full Batch mode? Does it download all pages of a tag?
    For me, it downloaded just 20 images of the tag “flandre_scarlet”, only the first page out of 448 pages.

  19. I apologize if my request is troublesome…

    Could this application be built without requiring the .NET Runtime? (For example, in C++ or Python, whose runtimes are smaller.)

    Because, as a student, I have not yet mastered the basics of .NET…

    1. That would need a total rewrite, because this application uses the WebClient class that comes with .NET. Actually, .NET (C#) is not that different from Java, and as for the runtime, it should already be included if you use Windows 7, or it is available on the CD/DVD bundled with computer magazines (Chip/InfoKomputer/etc.).
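
      For context, that dependency looks like this; WebClient does the HTTP work in a couple of lines, which is exactly what a C++ or Python port would have to re-implement (the URLs here are just examples):

      using System;
      using System.Net;

      class Demo
      {
          static void Main()
          {
              using (var client = new WebClient())
              {
                  // one call for the post list, one for an image file
                  string xml = client.DownloadString("http://danbooru.donmai.us/post/index.xml?tags=touhou&limit=10");
                  client.DownloadFile("http://example.com/sample.jpg", "sample.jpg");
              }
          }
      }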

Comments are closed.