39 thoughts on “danbooru downloader 20160816”

  1. Hi

    This is a great program. Thanks for making it!

    But I would like to request a few more features.

    1. In ‘Settings – Download – Filename Format’, I need %Date% and %Time% (updated) placeholders.

    2. I use this program with Sankaku, but searching with the tag fav:’username’ doesn’t work, and neither does -fav:’username’.
    I need this search method.

    May I request these features?

    And sorry for my poor English.

    Writing in English is very difficult 🙁

    Thank you!

      1. I tried using “fav:seikun7”, but it isn’t working for me,
        even though “fav:Nep_Fan” and other usernames work fine.

        I guess it is because I have too many favorites (84,247). Maybe.

        The error message is:

        Download List

        Server Message:
        Status Code: ProtocolError (404 Not Found)

        Is there a way to solve this problem?

        1. Did you set your favorites as private? That would be the reason you got the 404 error from the server.

          Otherwise you need to provide the login information to the app:
          1. Press F12 on your Chrome browser and select Network tab.
          2. Go to the booru site and login.
          3. Click one of the entries and copy the Cookie value from the Request Headers.
          For gelbooru, it should look like this: user_id=; pass_hash=
          For sankaku, login=; pass_hash=;
          4. Paste the Cookie value to the Username field.
          5. Set Login Type to Cookie. Refer to http://i.imgur.com/rCCjnPs.png
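
          For reference, here is a minimal C# sketch of what that cookie login amounts to at the HTTP level. It is only an illustration, not the downloader’s actual code; the cookie string, the username and the Sankaku list URL are placeholders/assumptions.

            using System;
            using System.Net;

            class CookieLoginSketch
            {
                static void Main()
                {
                    using (var client = new WebClient())
                    {
                        // Paste the real Cookie value copied from the browser here
                        // (placeholder values shown).
                        client.Headers.Add(HttpRequestHeader.Cookie, "login=yourname; pass_hash=0123abcd");

                        // Assumed Moebooru-style list endpoint; with the cookie attached,
                        // a private favorites search should no longer return 404.
                        string url = "https://chan.sankakucomplex.com/post/index.xml?tags=fav:yourname&limit=5";
                        Console.WriteLine(client.DownloadString(url));
                    }
                }
            }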

          1. Thank you for your help!

            I finally solved the problem.

            My favorites were set to private.

            Thank you very much!

  2. Nandaka, I have a question.

    Every time I download pictures from Gelbooru, it downloads fewer images than the previous version did.
    It used to be about 50 pictures fewer per download, but now it is 50-600 fewer.
    Please tell me how I can fix this.

      1. I have 6 log files: log4net.dll, plus 5 log-2016 files from 21 to 26 September, except for the 22nd; I don’t have that one.

  3. Why don’t you make videos on how to install them? I already did my part and it gives me an error.

    1. Are you asking about Pixiv Downloader or Danbooru Downloader?

      Pixiv Downloader requires no installation:
      1. just extract the archive
      2. run the pixivUtil2.exe
      3. key in your username/password.

      Pixiv requires a username/password; register on their site if you don’t have an account. Read the readme.txt for more info.

    1. Just extract the archive to your hard disk (e.g. c:\pixiv-downloader) and run the exe.

      You can change the configuration by editing the config.ini using notepad. Refer to readme.txt for details.

  4. How is it possible to add a site to the provider list myself?
    I don’t really understand how it works.
    What should I enter when I go to the main tab, press Edit and then Add?
    There are so many fields that I don’t know what I have to enter.

    Sorry for my bad English ^^

    1. It is possible, as long as the site provides API access to get the image list and post details.

      You can also edit DanbooruProviderList.xml using notepad.
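
      To illustrate what “API access” means here: the site needs an endpoint that returns the post list as XML or JSON. Below is a minimal C# sketch that checks for a Gelbooru-style list API (the same URL pattern that appears in a log later in this thread); the host and tag are only examples.

        using System;
        using System.Net;
        using System.Xml;

        class ProviderApiCheck
        {
            static void Main()
            {
                // Gelbooru-style list API; replace the host and tags with the
                // site you want to add to DanbooruProviderList.xml.
                string listUrl = "http://rule34.xxx/index.php?page=dapi&s=post&q=index&tags=spazkid&limit=5";

                using (var client = new WebClient())
                {
                    var doc = new XmlDocument();
                    doc.LoadXml(client.DownloadString(listUrl));

                    // A non-zero count means the site exposes enough of an API
                    // for a provider entry to work.
                    Console.WriteLine("posts returned: " + doc.SelectNodes("//post").Count);
                }
            }
        }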

  5. Hi Nandaka.

    I’m trying to use Danbooru Downloader to download from sites like Danbooru, Sankaku Complex and TBIB, but because my Indonesian ISP blocks those sites, Danbooru Downloader won’t work when I try to download from them.

    I can download just fine from sites that aren’t blocked like Safebooru. But I can’t download from the blocked sites.

    Any way to work around this?

  6. Hi, I would like to skip image files that I have already downloaded once, even if they have since been deleted from the folder (so not just duplicate file names). I want to avoid re-downloading images that were already downloaded in the past. Is there a way to do that?

    1. I use Batch Download, which records a ‘Batch Download on year-month-day.txt’ file. That is the list of images that were already downloaded in the past. I think it could be used to skip re-downloading them; a rough sketch of the idea follows.
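
      A rough C# sketch of that idea, assuming (this is only a guess at the format) that each line of the batch-download text file names one previously downloaded image:

        using System;
        using System.Collections.Generic;
        using System.IO;

        class SkipDownloadedSketch
        {
            static void Main()
            {
                // Assumed format: one downloaded file name per line.
                string logFile = "Batch Download on 2016-08-22.txt";
                var alreadyDownloaded = new HashSet<string>(
                    File.ReadAllLines(logFile), StringComparer.OrdinalIgnoreCase);

                foreach (string name in new[] { "image_001.jpg", "image_002.jpg" })
                {
                    if (alreadyDownloaded.Contains(name))
                    {
                        Console.WriteLine("skip, downloaded before: " + name);
                        continue;
                    }
                    Console.WriteLine("download: " + name);
                    // ...download here, then record the name so future runs skip it...
                    File.AppendAllText(logFile, name + Environment.NewLine);
                }
            }
        }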

  7. I’m having issues downloading from rule34.paheal.net. Most of the time it works, but sometimes it won’t download the image and reports a connection error, and those errors always happen on the same pictures I want to download. Errors from the log:

    1st Error

    2016-08-23 08:32:03,594 ERROR – Download Error: http://rule34-data-000.paheal.net/_images/e610df5f1da2bd919fa788e4beff6418/1958854%20-%20Pearl%20Steven_Universe.jpg
    System.Net.WebException: Unable to connect to the remote server —> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 85.17.84.201:80
    at System.Net.Sockets.Socket.EndConnect(IAsyncResult asyncResult)
    at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Exception& exception)
    — End of inner exception stack trace —
    at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
    at DanbooruDownloader3.CustomControl.ExtendedWebClient.GetWebResponse(WebRequest request, IAsyncResult result)
    at System.Net.WebClient.DownloadBitsResponseCallback(IAsyncResult result)

    2nd Error

    2016-08-23 08:32:12,093 ERROR – Download Error: http://rule34-data-000.paheal.net/_images/48c3bbd7f1de038afb1e39f58317b790/1958125%20-%20Lars%20Sadie_Miller%20Steven_Universe.jpg
    System.Net.WebException: The request was aborted: The request was canceled.
    at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
    at DanbooruDownloader3.CustomControl.ExtendedWebClient.GetWebResponse(WebRequest request, IAsyncResult result)
    at System.Net.WebClient.DownloadBitsResponseCallback(IAsyncResult result)

    1. Just found out it has something to do with the link. This one, “http://rule34-data-000.paheal.net/_images/e610df5f1da2bd919fa788e4beff6418/1958854%20-%20Pearl%20Steven_Universe.jpg”, does not work, but if you replace “http://rule34-data-000.paheal.net” with “http://rule34-data-008.paheal.net” it suddenly works. It has to do with the number between “data-” and “paheal”. I hope this info helps to fix it.

      1. Those URLs are returned by the server, and it looks like they use some kind of load balancer. How do you know 008 is the correct one?

        1. It is spread over multiple servers. Currently it looks like there are 14 servers (including 000). I have tested 6 pictures so far: 005 seems to be a thumbnail server, servers 002 to 013 (except 005) are all full-size image servers, and 000 and 001 never work. That’s all I have found out so far; I hope it helps to improve the Danbooru Downloader.
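
          Based on those observations, one possible workaround is to retry the same path on the other data servers. A hedged C# sketch follows; the mirror range 002-013 and the skipped servers come straight from the comment above and may change at any time.

            using System;
            using System.Net;
            using System.Text.RegularExpressions;

            class PahealMirrorRetry
            {
                static void Main()
                {
                    string url = "http://rule34-data-000.paheal.net/_images/e610df5f1da2bd919fa788e4beff6418/1958854%20-%20Pearl%20Steven_Universe.jpg";

                    // 002-013 were reported as full-size image servers; 005 is a
                    // thumbnail server and 000/001 reportedly never work.
                    for (int mirror = 2; mirror <= 13; mirror++)
                    {
                        if (mirror == 5) continue;

                        string candidate = Regex.Replace(
                            url, @"rule34-data-\d{3}", "rule34-data-" + mirror.ToString("D3"));
                        try
                        {
                            using (var client = new WebClient())
                            {
                                client.DownloadFile(candidate, "1958854.jpg");
                                Console.WriteLine("downloaded via " + candidate);
                                return;
                            }
                        }
                        catch (WebException ex)
                        {
                            Console.WriteLine("mirror " + mirror.ToString("D3") + " failed: " + ex.Status);
                        }
                    }
                    Console.WriteLine("all mirrors failed");
                }
            }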

  8. 2016-08-22 03:37:31,312 INFO – Logging Enabled
    2016-08-22 03:37:31,546 INFO – Starting up Danbooru Downloader 3.2016.08.16
    2016-08-22 03:37:31,546 DEBUG – Loading provider list.
    2016-08-22 03:37:31,578 DEBUG – Provider list loaded.
    2016-08-22 03:37:31,625 DEBUG – Danbooru Downloader 3.2016.08.16 loaded.
    2016-08-22 03:37:32,703 WARN – No tags.xml, need to download!
    2016-08-22 03:37:37,296 INFO – [Download Tags] Start downloading …
    2016-08-22 03:37:38,000 ERROR – Failed to parse: tags-rule34.xxx.xml
    System.InvalidOperationException: There is an error in XML document (1, 2). —> System.InvalidOperationException: is not able to come.
       at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderDanbooruTagCollection.Read5_tags()
       — End of inner exception stack trace —
       at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)
       at System.Xml.Serialization.XmlSerializer.Deserialize(TextReader textReader)
       at DanbooruDownloader3.DAO.DanbooruTagsDao..ctor(String xmlTagFile)
    2016-08-22 03:37:38,000 INFO – [Download Tags] Private Tags.xml saved to tags-rule34.xxx.xml.
    2016-08-22 03:37:38,000 ERROR – Failed to parse: tags.xml
    System.InvalidOperationException: There is an error in XML document (1, 2). —> System.InvalidOperationException: is not able to come.
       at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderDanbooruTagCollection.Read5_tags()
       — End of inner exception stack trace —
       at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)
       at System.Xml.Serialization.XmlSerializer.Deserialize(TextReader textReader)
       at DanbooruDownloader3.DAO.DanbooruTagsDao..ctor(String xmlTagFile)
    2016-08-22 03:37:38,000 INFO – [Download Tags] Complete.
    2016-08-22 03:38:07,203 INFO – Getting list: http://rule34.xxx/index.php?page=dapi&s=post&q=index&tags=spazkid
    2016-08-22 03:38:07,562 DEBUG – Download list completed

    What can I do?

    1. > 2016-08-22 03:37:38,000 ERROR – Failed to parse: tags-rule34.xxx.xml

      Delete the file and use another tags source (e.g. yande.re)
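
      For anyone debugging this: an “error in XML document (1, 2)” from the XML deserializer usually means the file’s root element is not the tag list it expects (for example, the server returned an error page instead). A quick C# check before deleting the file might look like this; the assumption that the expected root element is <tags> comes from the Read5_tags frame in the stack trace.

        using System;
        using System.IO;

        class InspectTagsFile
        {
            static void Main()
            {
                string path = "tags-rule34.xxx.xml";

                // Peek at the start of the file that failed to parse.
                string head = File.ReadAllText(path).TrimStart();
                Console.WriteLine(head.Substring(0, Math.Min(80, head.Length)));

                if (!head.StartsWith("<?xml") && !head.StartsWith("<tags"))
                {
                    Console.WriteLine("Not a tag list - delete it and use another tags source.");
                }
            }
        }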

  9. Is it possible to get an “Included Tags” kind of thing that works exactly opposite to how “Ignored Tags” works?
    For example, if you were to do a batch job for long_hair now and use %searchtag%, a lot of potentially wanted tags wouldn’t be included in the filename whilst %tags% would include several unwanted tags.

    The only current solution I can see that achieves the desired result of only including wanted tags in the filename is to use %tags% and ignore every single tag except the ones you want to have included, and that’s not really feasible.

      1. %searchtag% doesn’t do anything even remotely similar though.

        Doing a batch job on Konachan for e.g. “~censored ~uncensored” gets every post which has either of those two tags. That’s great. But %searchtag% then adds both of those tags to all the filenames even though only 1 of them is used in 99.9% of the posts on Konachan. That’s not great at all. A lot of those posts are also tagged with “thighhighs,” but since I didn’t search for it, it’s never included in the filename even though I would like it to be when the post is tagged with it. But if I were to add it to the search as a third tag using “~thighhighs,” it too would be added to every single filename even when the post doesn’t use it, making the filenames completely useless.

        It doesn’t work when searching for a single tag either, for the exact same reason. When doing a search for “thighhighs,” %searchtag% will only ever include that tag in the filename, even though a lot of the posts also have the tag “skirt.” I’d like that to be included, but I can’t, because then every single filename would have “~thighhighs ~skirt” in it, even when the post itself only uses one of them.

        What I would like to be able to do is to set up a list of however many tags I might be interested in (e.g. 60), one of which is “thighhighs.”
        Now when I use the filename format %tags% and do a batch job for “thighhighs” and the app fetches a post containing that tag, it will check if any of the post’s other tags match any of the ones in my list of 60. If any of them do, then the ones that match get added to the filename while the ones that don’t, won’t.

        If I had also added “skirt” to that list of 60, any post where the only matching tag was “thighhighs” would only have that tag in the filename, but any post tagged with both “thighhighs” and “skirt” would have both of those tags in the filename, because “skirt” was a match on my list of 60 tags.

        That’s what I mean by the opposite of “Ignored Tags.” That’s a list where only the tags NOT on the list will be included in the filenames. My list would be a list where only the tags ON the list would be included in the filenames, but only when the posts themselves use them as well.
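
        For what it’s worth, the filtering described here boils down to intersecting each post’s tags with a whitelist. A minimal C# sketch of that logic (the tag names and the whitelist are just examples):

          using System;
          using System.Collections.Generic;
          using System.Linq;

          class IncludedTagsSketch
          {
              static void Main()
              {
                  // The proposed "Included Tags" whitelist (e.g. ~60 entries).
                  var includedTags = new HashSet<string> { "thighhighs", "skirt", "long_hair" };

                  // Tags of one fetched post.
                  var postTags = new[] { "thighhighs", "censored", "1girl", "skirt" };

                  // Keep only the post tags that are on the whitelist.
                  string filenamePart = string.Join(" ", postTags.Where(t => includedTags.Contains(t)));

                  // Prints "thighhighs skirt": matching tags go into the filename,
                  // everything else is dropped.
                  Console.WriteLine(filenamePart);
              }
          }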

Comments are closed.