(4.11 KB 300x100 simplebanner.png)
Hydrus Network General #2 Anonymous Board volunteer 04/20/2021 (Tue) 22:48:35 No. 3626
This is a thread for releases, bug reports, and other discussion for the hydrus network software. The hydrus network client is an application written for Anon and other internet-fluent media nerds who have large image/swf/webm collections. It browses with tags instead of folders, a little like a booru on your desktop. Advanced users can share tags and files anonymously through custom servers that any user may run. Everything is free, privacy is the first concern, and the source code is included with the release. Releases are available for Windows, Linux, and macOS. I am the hydrus developer. I am continually working on the software and try to put out a new release every Wednesday by 8pm EST. Past hydrus imageboard discussion, and these generals as they hit the post limit, are being archived at >>>/hydrus/ . If you would like to learn more, please check out the extensive help and getting started guide here: https://hydrusnetwork.github.io/hydrus/
https://www.youtube.com/watch?v=i_u3hpYMySk

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v436/Hydrus.Network.436.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v436/Hydrus.Network.436.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v436/Hydrus.Network.436.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v436/Hydrus.Network.436.-.Linux.-.Executable.tar.gz

I had a great few days mostly cleaning and fixing things. If you sync with the PTR, update will take a minute this week.

macos release polish

I cleaned up the new macOS release. It seems to have launched and otherwise generally worked last week, but there was a bug in finding the specific database location macOS users are used to. Without the '--d' launch parameter, it was creating an empty new db inside the app, in the 'db' dir hydrus would normally use (and the really old App used to use, if you remember that), and hence would say 'hey, this looks like the first time you are running the program...' on boot. I have fixed the 'I am running in an app' detection and the ~/Library/Hydrus database path calculation routine, so everything should be back to normal. It also has the old readme and Applications shortcut in the dmg, and the filename should be fixed too. I expect this to be the only macOS release I put out from now on. Let me know if you have any more trouble!

miscount fix

Last week, I made the number in the 'pending (1,234)' menu title add up in a more efficient way. Rather than counting raw mapping rows every time, it uses a table of pre-computed numbers, the same used for autocomplete results. It turns out there were some legacy (from a long time ago) miscount bugs in there for some users. This resulted in a 'sticky' number that would not go away even after committing. A maintenance routine exists to fix this, but it is a sledgehammer when we need a scalpel. So, I have written a maintenance routine to regen this pending data efficiently and correct these old bugs. It is basically the same as I did a few months ago with the 'display' caches during the siblings and parents work, but for a deeper level of tags. It will be run on update, along with a new thing that forces the menu's count to regen, both of which can now be accessed from the database->regenerate menu in case we need them again in future. If you sync with the PTR, it may take a minute or so to finish. I hope this will fix the issue completely, but if you still have a bad count, or if your count drifts off zero again over time, please let me know!

underscores

After discussion with some users, I have added an experimental setting to options->tag presentation that replaces all underscore characters in tags with space characters, as long as you are in 'front-facing' UI like regular search pages or the media viewer. It works on the same system as the 'hide namespace' option--and siblings--in that you still see the raw truth in manage tags and other edit locations. This setting is experimental since it will add a bit of CPU lag to tag presentation and may result in some seemingly duplicate rows. I have long planned to fix the underscore issue with a really nice system, but I was convinced that adding a hacky system in the meantime would be a good thing to play with. If you care about this issue, give it a go and let me know if you run into any problems.
full list

macOS:
- I fixed an issue with last week's Big Sur compatible release where it wasn't finding your old database correctly--it was defaulting to a different location, so without a specific launch command otherwise, it started a fresh db and said 'hey, looks like first time you ran the program'. if you are a long-time user of hydrus, please install and run 436 as usual, it should figure out your old db location correctly as ~/Library/Hydrus without any launch command override needed
- If you never ran any of the old macOS builds, and you started using hydrus for the first time on macOS last week with the experimental Big Sur compatible build, your brand new database is in a funky location! don't update yet, or you will delete it! You will want to copy your .db files and the client_files folder from inside_the_435_app/Contents/MacOS/db to ~/Library/Hydrus, which should for most people be /Users/(YOU)/Library/Hydrus. feel free to ask for help if you can't figure this out
- fixed a 'this is macOS' platform check for newer macOS releases, which ensures the 'userpath' fallback is correctly initialised to ~/Library/Hydrus
- fixed the new macOS github workflow build script to tell hydrus that it is running from inside an App, so it knows to default to the userpath fallback correctly
- the macOS build now has the old filename
- it also has the ReadMeFirst.rtf file and Applications shortcut
- collected the new build-related files in static/build_files, which will likely see more files in future

pending tag cache regen:
- two new maintenance tasks are added to the database->regenerate menu--one that forces a recalc of your total 'pending' count as used in the pending menu, and one that recalculates the cached pending tag mappings for storage tags (just like the display one added some time ago, but one layer deeper). the menu entries are relabelled appropriately
- these routines will be run on database update, and should correct the bad pending menu counts many users discovered last week (the new efficient way that the pending count is calculated exposed some legacy bad cached pending storage mappings entries. we'll see if they come back, or if this is just clearing up bad counts hanging around from ages ago)
- the quick pending mapping cache regen routines take a little longer to initialise now, but they now clear out surplus tag data, rather than just regenerating the 'correct' tags

misc:
- added an experimental setting to _options->tag presentation_ to replace all underscores in tags with spaces. this is just a render rule, so it will only apply in front-facing 'display' contexts (a bit like how siblings work in search pages, but you see the truth in _manage tags_), will consume a little more CPU with big lists, and may result in some duplicate rows, but let's see how it goes. this is basically a quick hardcoded hack until there is a more beautiful solution here
- in the two 'Duck' dark QSS styles, removed fixed font size on button labels that wasn't scaling on high DPI screens
- the filename tagging panel now shows parents and siblings correctly on the 'tags for all' and 'tags for selected' taglists. I'd like to show siblings and parents in the file list above in future, but it'll be a bit more tricky to do neatly and without megalag
- GUGs and NGUGs now report their reasons for not being functional in the downloader selector list and subscription errors. typically this will be a missing url class or an url class missing a matching parser, but more complicated example-url-parsing errors will also be outlined
- fixed a bug in the client api in the set-cookies call when no cookies are set, and ensured all cookies added this way are saved permanently (before, some could be lost if that domain was not used in network traffic before the next client shutdown)
- the 'refresh account' button in _review services_ now works on the new async system. it presents errors nicely
- a repository's current update period is now stated in its review services panel
- review services now says 'checking for updates in...' rather than 'next update due...', which is more accurate and will matter more with small update times
- fixed some false positive instances of 'this server was not a tag repo' error in the network engine
- the hydrus server now also outputs a hydrus specific 'Server' header (rather than some twisted default) on 'unsupported request' 404s and any other unusual 'infrastructure' 4XX or 5XX
- if the repository updates in the filesystem are lacking some required file information when calculating what to process, the client now queues those files for a metadata regen maintenance job and raises a cleaner error
- just as a safety measure, if a repository ever happens to deliver a metadata update slice with a 'next update due' time that has already passed, the client now adds a buffer and checks tomorrow instead
- a new program launch argument, db_transaction_commit_time, lets you change how often the database's changes are committed to disk. default is 30 (seconds) for client, 120 for server
- altering the repository update period now prints a summary of the change to the log
- updated the ipfs links in the help
- updated the main help index.html and the github readme.md with the user-run repo and wiki at https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts

next week

I may or may not be tied up with IRL stuff for a bit. Once I am back to things, I will keep working on smaller issues and get started on the pre-work for multiple local file services. There are several hundred locations where the 'my files' service is hardcoded as the local file reference, so a decent part of this work, before I get to file service migration and new location import options, will just be putting some time into ancient code.
Just updated to the new version. The ghost-pending-tags issue seems fixed. Thanks!
I found a small problem with watchers. The status column of the watchers list doesn't sort by actual time when you click on it (to sort by status). Instead, it seems to sort by the first number of each watcher in the list, so for example, "15 hours" is treated like it's bigger than "6 days".
>>3638 As far as I know internally the status isn't "updating at X" where X can be sorted, it's "an update is planned at some point in the future"; the sort is then made from the subject, alphabetically. Could this be what you're seeing here? (I'll state here that I personally like this behavior)
>>3639 Actually you're right. Sorting by status doesn't sort by status at all. It's just identical to sorting by subject. I wish there was a way to sort them by status, so that the ones closest to being updated would always be at the top (or bottom) of the list.
can someone make a downloader/parser for https://blacked.booru.org/ please? one exists for bleachbooru but not blacked
has there been any consideration about offering an option to compress backups, then decompress upon re-importing (no loss of quality, or hash changes)? realistically, how much space could it save? could it drop a 200gb backup to 150 or 100gb? If so then it's worth it imo, but just dropping it from like 200gb to 190gb, not so much.
>>3650 i'd also suggest adding progress bars to popups such as backups, maintenance jobs, etc. currently it's just a small box with no indication of how far along the process is, or an eta of when it will be done.
>>3654
I use ZFS to compress all files managed by hydrus. Using the zstd-19 (best compression) algorithm, I get a compression ratio of about 1.10. Of course, it depends on what you are actually storing though. gzip-9 was about 1.14.
>>3652
see >>3628
Is it completely safe to stop a manually initiated (full) vacuum? There is no stop/abort button, just the Windows X button to close that dialog. Also, are there any benefits to doing a full vacuum every now and then? Or are the gains negligible?
>>3654
>I use ZFS to compress all files managed by hydrus.
This is actually a great idea, I haven't messed with much outside of the realm of ext4 yet. I planned on using ZFS on my future server build, and obviously to play around with it a little beforehand. But I was always curious about use cases for features like this in ZFS, and I guess this makes sense. How much of a toll does it put on your CPU and drives? The only issue I see with this is it's not as universal, say for users running Hydrus on Windows.
>I get a compression ratio of about 1.10
I'm honestly too stupid to understand compression ratios. Do you have a real world example of a 1.10 ratio? Like xGB becomes xGB?
>>3654 >>3657 Also, is it gonna screw with all the hashes of my files? Like if I only compress my backup folder, is the backup file going to have a different hash after the compression/decompression than the one in Hydrus (or a copy of that file located somewhere else)?
>>3657
>Do you have a real world example of a 1.10 ratio?
On ext4, 110G of files takes up 110G worth of space on your disk. With their ZFS/zstd config, their 110G of files takes up 100G of space, saving 10G. Look for "data compression ratio" on Wikipedia; it's simply "ratio = uncompressed / compressed". Not trying to start a flame war, but I personally use Btrfs, also with zstd, and I'm "only" getting a 5% (so 1.05) ratio, mostly because of incompressible files.
>>3658
No, both ZFS and Btrfs use what's called "transparent compression", meaning that even though the file is stored compressed on your disk, the OS will see it as a normal file with its normal hash. As a side note, this is also known to mess with tools like du (because they count what they see, not what is stored), which you have to replace with compsize on Btrfs for example. Of course, there's a CPU cost to all of this, but my machine is pretty beefy so I'm not feeling it; ymmv.
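Since both posts gloss over the actual commands, here is a minimal sketch of how you would turn this on and measure it. The pool/dataset name and mount device are hypothetical; zstd-19 needs OpenZFS 2.0+ and the Btrfs level syntax needs kernel 5.1+:

#!/usr/bin/env bash
# ZFS: enable zstd on a dataset and check the achieved ratio
zfs set compression=zstd-19 tank/hydrus   # applies to new writes only, old data stays as-is
zfs get compressratio tank/hydrus         # e.g. prints 1.10x

# Btrfs: mount with compression, then measure real usage with compsize
mount -o compress=zstd:9 /dev/sdb1 /mnt/hydrus
compsize /mnt/hydrus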
What can the client API do? I might try to make an automatic reencoder if I get enough free time, but that would require searching for files based on tags and filetype, exporting the file paths, importing files from a folder, mapping tags between files, adding tags, sending files to the trash, and setting file relationships.
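For reference, a rough sketch of two of the calls such a script would lean on, going off the client api docs linked later in the thread. The port is the client's default 45869; the access key, tags, and file path are placeholders, and note the search endpoint takes a percent-encoded JSON list of tags:

#!/usr/bin/env bash
API="http://127.0.0.1:45869"
KEY="replace-with-your-access-key"

# search for file ids matching some tags; returns {"file_ids": [...]}
curl -s -G "$API/get_files/search_files" \
  -H "Hydrus-Client-API-Access-Key: $KEY" \
  --data-urlencode 'tags=["creator:someone", "blue eyes"]'

# import a file that is already on disk
curl -s -X POST "$API/add_files/add_file" \
  -H "Hydrus-Client-API-Access-Key: $KEY" \
  -H "Content-Type: application/json" \
  -d '{"path": "/tmp/reencoded.webm"}'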
>>3650 Compressing images and videos will do jack shit. You may save a tiny bit of space from the few files that can be compressed, though. Maybe 200GB to 199GB or so, definitely not 150GB. Might save another tiny amount by compressing into a single archive file as well, but that's only useful for backups.
I'm planning on transitioning from the Hydrus built-in backup to the third-party backup system that I use for my other files (Deja Dup), since you said those are better anyway, but I have 2 questions:

When running the third-party backup, is it safe to still use Hydrus, or at least have it open, or should Hydrus not be running at all for the duration of the backup?

The help section of the Hydrus website mentions a few different things that you need to back up, but it confused me because they seem contradictory. What are you supposed to back up when making a backup of Hydrus? Do you just back up the entire application, or is there a specific directory where the database is stored, and you can just back up that?
>>3687 10% is not something I would call "jack shit", but it really does depend on what data is stored. I have a lot of images and videos and get 10%. Compression and decompression will however take a fair bit of CPU time, so I wouldn't compress it if I had to do it every time I take a backup. But with next gen filesystems, why not? ZFS even checks if the compression would be worth it and doesn't compress files unless a good ratio can be achieved.
>>3656 it's still going bros >.< it says I can't abort it unless I hard kill hydrus. will that fuck up my db?
>>3703 I did it multiple times before and nothing happened to mine. I don't know if that means it's safe or if I just got lucky though.
Hi. File permissions in tar.gz for Linux are too broad. Should you maybe do something like this after the compilation?

#!/usr/bin/env bash
declare -r WD="/path/to/your/hydrus"

find "${WD}" -type d -execdir chmod -c 755 {} \;
find "${WD}" -type f -execdir chmod -c 644 {} \;

chmod 755 "${WD}/db/sqlite3"
chmod 755 "${WD}/bin/swfrender_linux"
chmod 755 "${WD}/client"
chmod 755 "${WD}/server"
>>3630
Thanks m8, had several reports now, it seems my fix covered it all. Please let me know if it comes back. I think it was all legacy stuff, but we'll see!

>>3638 >>3639 >>3648
Thank you for this report. Yeah, when I was figuring this stuff out, I hacked in that behind the scenes, the values for DEAD/working/pending/'checking in xxx' are actually like -4, -3, -2, -1 or something. Since all the -1s are equal, it then secondary-sorts by the columns in order, so basically the subject. The 'checking in xxx' text is actually decoration on a blank 'status' of 'not working atm'. Updating it so the actual check times sort by time due is a great idea, I'll update it.

>>3649
If you feel brave, you may be able to figure this out yourself! Check the url class and gallery url generator (gug) dialogs under network->downloader components and duplicate everything for bleachbooru, then just update all the url info to the new domain and manually set the 'url class link', and I think you'll be good!

>>3650 >>3687 >>3695
My general policy here is that backups/compression/encryption aren't really my wheelhouse, and there is a ton of third party software that does it great already, so I should be very careful putting my own time into reinventing the wheel. If you have a complicated situation or requirement, I recommend you look for other software that can do backup and compression. I have a little guide here: https://hydrusnetwork.github.io/hydrus/help/getting_started_installing.html#backing_up tl;dr: FreeFileSync is great. As said, lossless compression for media tends to be not great, since it is already well compressed. The database files themselves can be compressed very well, usually at least 50% savings. 7zip is fine.
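As a rough illustration of that last point (the paths here are hypothetical, and the client must be closed first so the db files aren't mid-write):

#!/usr/bin/env bash
# compress just the four database files; these usually shrink by 50% or more
cd /path/to/hydrus/db
7z a -t7z -mx=9 ~/backups/hydrus_db_$(date +%F).7z client*.db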
BTW, to the Anon talking about excess writes in the last thread: thank you for bringing it to my attention. I was skeptical that file imports were causing too much write activity, so I did some IRL tests on my big (2 million file) client. I have noticed on and off that it has sometimes had a bunch of read and write i/o stacked up when I have casually looked, but I always chalked it up to PTR processing. After my tests, I believe bigass clients, particularly those with A) big UI sessions and B) thread watchers or other downloaders doing work, continually changing the session, will result in a whole bunch of excess writes due to 'last session' changing and saving by default every 5 minutes. Please try increasing your auto session save time up to 15 minutes or higher (options->gui pages).

I have now started a plan to break up the monolithic session save, which has outgrown the initial object prototype in the same way subscriptions and the bandwidth manager did. I will split up session save to be more granular, saving separate objects per page and only on changes, and I believe this will reduce writes significantly on big clients.

Furthermore, this accelerates my longer term plan to have continual lightweight passive CPU and HDD tracking on all jobs, a bit like how your phone will track app battery and data usage, so all hydrus users will be able to load up profile pie charts and we'll get a better idea of where the latest bottlenecks and resource hogs actually are, rather than guessing and testing so much. Thank you again for your comments and help.
>>3656 >>3703 >>3704 It is generally safe to force-quit hydrus. Usually the worst that can happen is losing 30 seconds of work. There's a small chance Hydrus might complain when it boots up again, or take a minute on the boot splash screen to put itself back together, but it'll recover. SQLite is really great at recovery. Nothing should be corrupted by a process kill--the chances are so remote that they aren't worth talking about. Because of something special I do with the separate database files, there is a small chance (I'd estimate a miniscule chance in normal situations, less than 1% if you quit during PTR processing) that some files may get a transaction while others roll back, in which case hydrus will generally recover fine, but may end up moaning and wanting to do some PTR reprocessing or something.

The bad thing you don't want is for your computer to suddenly lose power. All bets are off in this situation, and if your hard drive was busy writing 150MB/s at the time it lost power, it could well corrupt some pages in your database. These are most of the problems I deal with when I help people. Sometimes the damage is completely recoverable with some terminal work in install_dir/db, sometimes it is a real pain in the neck and you better hope you have a backup. Get a backup and a UPS for your computer, then you are comfy.

I think I am going to disable auto vacuum. It isn't so useful overall, and often ends up being more trouble than it is worth. You can turn it off under options->maintenance and processing.
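On the 'terminal work in install_dir/db' point: for anyone who wants to check their own files after a scare, a minimal sketch using the sqlite3 shell (the linux build even ships a sqlite3 binary in that folder; the path is hypothetical, and run it with the client closed):

#!/usr/bin/env bash
cd /path/to/hydrus/db
for f in client*.db; do
    echo "checking $f"
    sqlite3 "$f" 'PRAGMA integrity_check;'   # prints 'ok' if the file is sound
done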
>>3684
I think you can do everything there but file relationships and delete. Maybe you'll need some hackery dackery script-side client_files path variable to figure out the hard drive path, if you want to read off disk rather than get the file over the API. File relationships and more file action commands will come--I am slowly filling out the API and want to add basically everything to it. Here's the full list of commands atm, I keep this updated with versions: https://hydrusnetwork.github.io/hydrus/help/client_api.html

>>3694
Here's my guide: https://hydrusnetwork.github.io/hydrus/help/getting_started_installing.html#backing_up

1) Turn off the client when you backup. I do it every Wednesday, before I update. It takes about 10 minutes to scan and 10 minutes to update a 2 million file db to an 8TB USB drive. Do not use a cloud service that continually watches and updates a directory unless you can ensure it only runs when the client is off.

2) Back up your install_dir/db folder. If you have moved stuff with database->migrate database, then have a copy of all those places too. As simply as possible, you need to back up: the four client*.db files, and the 256 'fxx' sub folders under client_files, which by default are under install_dir/db. If you have copies of them, we can stitch a client back together for you. Note if you are on macOS, your db will be stored in ~/Library/Hydrus instead of install_dir/db. Hit file->open->database directory to see for sure!

>>3705
Thank you, I will check out what I do and update my script!
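To make that backup advice concrete for Linux users, a minimal sketch (paths hypothetical; run it only while the client is off):

#!/usr/bin/env bash
SRC="/path/to/hydrus/db"      # or ~/Library/Hydrus on macOS
DEST="/mnt/backup/hydrus"

# the four client*.db files, plus the 256 'fxx' folders under client_files
rsync -a "$SRC"/client*.db "$DEST"/
rsync -a --delete "$SRC"/client_files "$DEST"/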
>>3712
>Do not use a cloud service that continually watches and updates a directory unless you can ensure it only runs when the client is off.
Does this also apply to the backup directory? For example, will it fuck up if I set up rsync to read only my Hydrus backup directory and copy it to another server?
>>3710
>I think I am going to disable auto vacuum. It isn't so useful overall
I had run this manually; I was curious to see if I would see any benefits from doing it. It said only a few hours, but that turned into like 2 days. I just force-exited it and things seem fine. I did notice a bunch of errors behind the job dialog before quitting, wondering if maybe it got stuck, dunno. I would check if you have some kind of abort if it encounters errors, as well as adding a progress bar.
>>3710
>sometimes is a real pain in the neck and you better hope you have a backup.
of course, but I am getting to that point where I need to buy a new one lmao. I'm downloading so much.
Wanted to throw this here in case it got lost on the other thread: someone made a new booru client for Android that supports Hydrus. Also FLOSS, unlike Animeboxes. https://github.com/NO-ob/LoliSnatcher_Droid
(60.33 KB 882x731 e36.jpg)
why does hydrus show a total of 200gb in how boned, but my backup is 250gb? does how boned not show tag database size stats? can that be added there please. also add a tooltip explanation of what triggers being "boned" or not.

also, when you click update backup, can a popup show how much space the backup is currently using, where it is currently located, and how much it will grow when you update the backup?

current backup size: 250gb
estimated post backup size: 300gb
backup path: /dev/sdb1/backups/hydrus
do you want to continue with the backup? yes or no
>>3717 Boned is how much is in your inbox versus your archive. It doesn't show the extra stuff like thumbnails and tags. Your backup might also be bigger because you've deleted files since then.
I just used PixivUtil2 to download someone's pixiv, to get the ugoira, but in importing, virtually none of it matches the hydrus download of the same pixiv. It's probably reproducible with any account, but PixivUtil2 downloads so fucking slowly, especially in this case, so I will just use the one account I used here. https://www.pixiv.net/en/users/515040 pixiv id:515040 (1,693) 1,739 successful (345 already in db) Only 39 of the files were webms, the rest are just images. Only 345 of all files overlapped between hydrus and PixivUtil2. Hydrus can't download pixiv ugoira at the moment. To me this is an enormous waste of space. Never doing this again. I know it's probably because pixiv removed metadata from their images at some point: https://github.com/Nandaka/PixivUtil2/issues/807 But I didn't expect PixivUtil2 to magically download the old images, where hydrus downloads the new ones. Or maybe something else is happening here to cause it.
>>3718
>Your backup might also be bigger because you've deleted files since then.
can't check atm, but I'm almost positive I don't have an extra 50gb of deleted files. if anything, like 5gb is deleted files. I usually perma delete stuff when I do delete. If I delete them in Hydrus, then run an update backup, deleted files should be removed from the backup too. So I have zero idea why the backup would be +50gb
(81.97 KB 331x251 well shit.png)
I assume the hydrus companion browser addon doesn't work with very old firefox versions? I'm using Waterfox Classic, which uses the legacy addons.
>>3721
>I'm using Waterfox Classic
Consider Librewolf. https://librewolf-community.gitlab.io/
>>3723
>No Telemetry
>LibreWolf is always built from the latest Firefox stable source
(which has forced google telemetry)
(18.17 KB 253x255 areyoubraindameged.png)
>>3726 >forced
>>3727 https://archive.li/hF6KB What would you call shipping it with google analytics on because "It is extremely useful to us and we have already weighed the cost/benefit of using tracking."? Sure, after much bitching you can now opt out, but by default it's still forced on the users.
>>3723 Looks like it's based on the new Firefox, which means it has borked mouse gesture support. It doesn't work on all pages, for example an empty tab. Yes that's a big deal for me.
>>3728 "Forced on by default" is quit a bit different from simply "forced". I wonder anon, do you think that variable is turned on or off in a browser that has browser history and persistent cookies disabled by default? Stop being fucking retarded, this faggot is using (((Waterfox))). Librewolf is obviously going to be less cancer.
How about an advanced inbox filter? Basically you designate keys (e.g. number keys) and what each of them does, like archive, delete, set rating, and set tags. Maybe have all other shortcuts that use these keys be ignored within the filter for simplicity's sake. I often sort my files between archive, trash, files I have to crop, and files I have to mess with (e.g. zips and psds), and there isn't a good way to do it currently.
>>3742
>files I have to crop
Can you provide context to this one? I have cropped images before too, but was called autistic for it. Didn't know anyone else does this.
>>3743 For the most part it's either watermarks like the second pic or shit that ruins the pic like the first one. Also long videos where I don't care about the overall structure (e.g. JAVs).
>>3744 And for comparison this would be the end result.
>>3744 >>3745 It was an offhand question, but I don't think our use of cropping is comparable at all. First of all, for examples like your right image, I have in the past come across an image that just has a bar the hosting site added, and I never found another source. But I don't remember what the image was by now. For virtually everything else, I have succeeded in reverse image searching and finding another source. Let alone that for this specific example, when I reverse searched with yandex (clicking the shield top right to switch from "moderate" to "unsecure" for uncensored results), I think I maybe found a source for you of some sort: https://namethatpornstar.com/thread/715670/
>We Live Together – Alyssa Reece, Dani Daniels & Elisa - Touchy Feely
Googling that shows a video featuring them in that outfit at the first result, but finding an original source for the image set must be possible with these tags.

For "shit that ruins the pic", that requires far too much tact for me to've ever done that. I've stitched images before, though, but to me that's slightly different. I used to just crop habitually, like, so much. I've recently really slowed down on it, but you can see where I once brought it up and was called autistic for it here: https://archived.moe/h/thread/5844953/#5848468 I brought it up again elsewhere and actually wasn't bullied for it: https://desuarchive.org/trash/thread/32822723 But yeah, it's not the same as you at all. For me it's 99% just a plain crop.
>>3746
>that requires far too much tact for me to've ever done that
While there's more autistic stuff I have done, a lot are just rectangle select -> crop to selection edits, like this pic for example.
>archive
Yeah, I'd classify that as autism. I can somewhat understand it: given that the area is more focused and therefore the tags are more specific, it's not like there aren't benefits. But that looks kinda fucky due to the simplistic crop, which triggers my autism, and it negates all benefits from third party tagging such as tagging sites and the PTR.
>>3747 Listen mate, I am way too incredibly tired to pretend to give anyone the benefit of the doubt anymore when they use "autistic" as an insult. It literally means nothing but that you're socially unacceptable. That's it. Literally nothing else. To call someone/something they do autistic is to say you have nothing to say but parrot what's socially acceptable. But to lazily finish the post otherwise, your spoilered sentence is largely incomprehensible to me anyway, and doesn't seem to've made a coherent point besides "and negates all benefits from third party tagging such as tagging sites and the PTR". I don't sync to the PTR, I crop for viewing experience. I also don't see why a crop from a tagged image would lose the tags; I could just copy them over, and not cherrypick what redundant tags to remove. Either way, to me a panel of a doujin is now viewable as if it were any other image. Buzzwords of "autism" super convince me that I'm evil for doing this, man.
(90.09 KB 1872x813 Untitled.png)
I just realized my "url import" page (which I only use via "hydrus companion" > right click > "send to hydrus") hasn't imported a damn thing since I updated hydrus the same day the latest release dropped, six days ago.
Did Sankaku do something to break whatever default downloader Hydrus uses? All sankaku links are giving me "ignored - could not find a file or post URL". On that note, is it possible to view the logic of the default downloaders included in hydrus? I know you can import/export downloaders, but I don't see the already-included downloaders in the menu.
>>3684 I have some bare bones of an autoreencoder if you want to mess with them. This was a personal project I wrote and only ran once to do a bulk reencode. YMMV, watch your step, slippery when wet, etc. https://anonfiles.com/5bB2t8t6ue/app_js https://anonfiles.com/B3Betft2u2/hydrusconfig_json https://anonfiles.com/P8B3tet6u6/package_json
>>3756 I mean I guess I'll just download these images through other means, I thought there would be a release after 7 days since the last one, which might've fixed this.
I had a good week fixing bugs and cleaning old code. Client performance is improved for large sessions, particularly those that have 'collected' page media, and I got started on multiple local file services by overhauling the core file delete/trash/undelete system. As a neat side-effect, the client now remembers when you delete files. The release should be normal time tomorrow.
>>3756 >>3779 Did the page get paused somehow? Maybe a misclick? It looks like URLs have been queued up, but if they are sitting on that blank 'status', it could be the file queue on that page is paused. There should be a little pause/play icon button on the management panel on the left side of the page. If it is not paused, does the page itself have any status, like 'network traffic is paused' or 'waiting on bandwidth' or 'this domain has recent errors', or similar? If it is just all blank, is there any chance you have more than 10 other very long-running downloaders in other pages running at once?
(1.08 MB 1778x1050 Untitled.png)
(73.28 KB 1778x1050 x.png)
>>3788 I just booted my client normally and kept doing as I ever was before, then only noticed it wasn't working six days after the fact, and made the post complaining right after. I opened a new page of the same type to confirm what the pause/play icon looks like at default, but that didn't matter anyway, since if I toggle mine and leave it there for a while, nothing happens. I do leave it with the pause icon displayed, to match the new page's pause/play icon. I don't have any downloaders running; I keep my client open 24/7, so there is lots of downtime where nothing is actively downloading besides subscriptions. When I add something to download, it starts downloading instantly more often than not. This "url import" page has done nothing since I updated.
I think I just reversed the import order of some offline files I dragged into my hydrus, but I haven't been able to reproduce it. Does anyone know how I did that?
I changed my shortcuts to zoom via scrolling shortly after updating to this latest version and now it's bitching at me about dividing by zero:

ZeroDivisionError: division by zero
File "hydrus\client\gui\ClientGUIShortcuts.py", line 1238, in eventFilter
    shortcut_processed = self._ProcessShortcut( shortcut )
File "hydrus\client\gui\ClientGUIShortcuts.py", line 1129, in _ProcessShortcut
    command_processed = self._parent.ProcessApplicationCommand( command )
File "hydrus\client\gui\ClientGUICanvas.py", line 4185, in ProcessApplicationCommand
    command_processed = CanvasMediaListNavigable.ProcessApplicationCommand( self, command )
File "hydrus\client\gui\ClientGUICanvas.py", line 3971, in ProcessApplicationCommand
    command_processed = CanvasMediaList.ProcessApplicationCommand( self, command )
File "hydrus\client\gui\ClientGUICanvas.py", line 2514, in ProcessApplicationCommand
    command_processed = CanvasWithDetails.ProcessApplicationCommand( self, command )
File "hydrus\client\gui\ClientGUICanvas.py", line 1563, in ProcessApplicationCommand
    self._ZoomIn()
File "hydrus\client\gui\ClientGUICanvas.py", line 1223, in _ZoomIn
    self._TryToChangeZoom( new_zoom, zoom_center_type_override = zoom_center_type_override )
File "hydrus\client\gui\ClientGUICanvas.py", line 1092, in _TryToChangeZoom
    widths_centerpoint_is_from_pos = ( zoom_centerpoint.x() - self._media_window_pos.x() ) / media_window_width
>>3797 Damn, thank you for this report. Please go back to the old shortcut and I'll see what is going on here.
https://www.youtube.com/watch?v=FfBdUvVQpNQ

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v437/Hydrus.Network.437.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v437/Hydrus.Network.437.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v437/Hydrus.Network.437.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v437/Hydrus.Network.437.-.Linux.-.Executable.tar.gz

I had a good week mostly fixing bugs and optimising. It will take a couple of seconds to update this week.

all misc this week

I reduced a heap of UI lag on clients that have pages with a lot of collected (like 'collect by creator') media. If you often have five or ten thousand files collected and you noticed your client was getting choppy just when running some downloaders, I hope this improves things. Let me know how you get on!

I started real work on multiple local file services. Most of it is boring behind the scenes stuff, but as part of it, I overhauled the trash system this week. A heap of logic is improved, it is ready for more than one 'my files' service, and now hydrus remembers when files are deleted. Delete timestamps have never been recorded clientside, and unfortunately we cannot recover old information retroactively, but it is stored for all deletes from now on. Whenever the client wants to say 'this file was deleted', it should now have 'at an unknown time' or a nicer '3 days ago' suffix.

A second neat thing with the improved deleted files storage is I hope in the nearish future to let you search deleted files. This is a rare, clever query, like 'all known files', but there will be some kind of button you can press to flip your 'my files' or 'all local files' search to go through what has ever been removed from them. 'system:file service' will get similar improvements.

To reduce confusion, I renamed some hydrus network concepts across the program. The 'access key' (secret password for account), 'account key' (account identifier), and 'registration key' (one time token to create an account) are now known as 'access key' (i.e. no change), 'account id', and 'registration token'. There is more work to do here, particularly improving server setup and account management workflows to suit the user (rather than my technical ease), so I will keep at it.

In a related topic, the PTR is updating its accounts. The public account is moving more towards a 'read-only' account, and accounts that can upload siblings and parents (and perhaps tag petitions, eventually) will be individual to you and freely auto-creatable in manage services. This is mostly an attempt to make janitorial decisions easier and more accurate, since at the moment everything on that side is merged due to the shared account. Permissions have not been used much in hydrus network yet, and the workflows and user notifications here are bad and buggy. Please bear with me as I iron out the problems and make it all nicer to use.
full list

misc:
- hydrus now keeps a track of when files were deleted! this information has never been recorded clientside, and it is sadly not retroactively recoverable, but it is stored for all deletes from now on. on occasion, when hydrus says 'this was deleted from xxx', it will now have 'at an unknown time' or a nice '3 days ago' string attached. it will take a few seconds to update this week as the new table data is created
- the 'trash' panel on review services now has an 'undelete all' button
- fixed a typo error in manage services when auto-creating a service account when more than one type of account can be created
- the thread watcher page now sorts the status column secondarily by next check time (previously, equal status would sort alphabetically by subject as a fallback secondary sort)
- I have renamed some network concepts across the program. before we had access keys, account keys, and registration keys--now we have access keys (secret password for account), account ids (identifier for account that jannies may need), and registration tokens (one-time token used to create a new account). I hope this reduces some confusion
- reduced some overhead when fetching media results for a search, and when refreshing their tags on major content updates
- fixed a 'no such table: mem.temp_int_hash_id_1'-style database error state that could persist for 30 seconds or more after certain rare rollbacks
- fixed the FlipFlip link html in the client api help
- fingers crossed, I fixed that bad Applications shortcut in the new macOS release
- fixed a couple more instances of 'pulsing' progress gauges. now they should be blank

more efficient updates in sessions with collected media:
- several updates this week should reduce client UI lag when the session contains any pages with a lot of collected media, particularly when you are also running several downloaders (which spam all sorts of content updates across the client):
- the content update pipeline now tests collections for their files before content processing, and now filters down to process just the updates in a group that apply
- collections' post-content-update internal data regeneration routine now has more options for fine regen (e.g. no need for tags recalc if the update was 'archive file'), ignores updates for urls and notes (for which it maintains no summary), and only falls back to 'just regen everything' on file location changes
- the 'selection tags' taglist now retains intelligent memory of its previous selection through collect/uncollect events, which reduces collect/uncollect lag on well-tagged files significantly

boring multiple local file services stuff:
- I cleaned a bunch of old hardcoded references to 'my files' and related code. it is not very interesting, but there are a few hundred references to clean up and convert to a system that supports 1-to-n local services, and this week I started hacking away, mostly presentation stuff, labels on menus and so on
- your 'my files' now has a separate deletion record to the 'all local files' domain. its count shows in 'review services', and for the moment will just be 'all local files' plus the count in trash, but this will become more important when you can have multiple 'my files'
- behind the scenes re-jiggering means that the deletion record now records deletion time and original import time. delete and undelete transitions are neater as a result
- logically, files are now generally no longer moved to the trash nor undeleted from there; they instead fall there when they are in 'all local files' but no longer in any local domain, and are undeleted back to a specific service. a bunch of awkwardness is cleaned up, and import/delete/undelete content updates are regeared and ready for multiple local file services
- a whole bunch of little things have been fixed and changed behind the scenes. I cleaned file service code in general as I went. examples of little things fixed:
-- a 'delete and do not keep a deletion record' action now correctly does not change the cached number of deleted files as reported in review services
-- the 'clear deletion record and try again' 'remove from trash' component now uses a unified and improved and UI-updating 'untrash' database action, with correct service count changes and UI-side status changes
-- the 'clear deletion record and try again' action on downloader import queues now handles mixes of actually deleted files and files just in trash more neatly
-- in the very odd situation that you are looking at a non-local file on 'all known files' and it is then imported using 'archive on import', its thumbnail and metadata now fade in correctly as archived
- added some unit tests to test the new file delete/undelete transitions
- cleaned up a bunch of hacky old db SELECT code

next week

Next week is a 'medium size' job week. I would like to try putting some time into the ancient image rendering pipeline and related systems like previous/next file prefetch. The basic media viewer has been jank and bad at zooming for too long. I am not sure I can make it beautiful, but I will try to clean some things up. Otherwise, I am afraid I have fallen behind on some messages and other administrative work. It would be nice to put some time aside to catch up on replies, clean up my immediate todo lists, and triage some priority lists, but we'll have to see. I regret that I have had trouble recently doing anything but slinging out code.
>>3800 Thanks for the update!
>the thread watcher page now sorts the status column secondarily by next check time
What about the 404 threads that I haven't deleted yet? Of course I know they aren't intended to be kept open forever, but now they aren't ordered alphabetically anymore (which makes it harder for me to find similar threads); I think they may be in a "last update" order now? Can I request a change to this behavior, maybe the return of the "simple" statuses that we had before, only for the 404/DEAD threads? Thank you!
(2.87 MB 200x234 TerryHiThere.gif)
Please repurpose the tag sibling/parent system for url classes, or allow having a single url class match multiple different schemas, ok, thanks. My main motivation is that I have a bunch of files from different nitter mirrors cause they keep dying, and it's a huge pain to manually move them to a single instance.
>>3789 I wanna add that when I try to close the client while the "pause/play files" button is displaying a pause button, it says "1 page says: This page is still importing". Only when I toggle it can I close the client without any warnings. So it's apparently trying to do it, but it hasn't been able to since, again, I updated to this version (436). I am only restarting for the first time since then to update to the latest version again (437).
Hydrus dev, is there any chance you can properly compensate for the ".ugoira" filetype? https://github.com/Nandaka/PixivUtil2/issues/69 https://www.pixiv.net/en/artworks/69841102 Currently hydrus treats it as a .zip file, which can't be played or even opened externally via ctrl+E, since even though the filetype has a default setting to open in Honeyview, hydrus opens it in my .zip filetype association default anyway. I added extra stuff I set PixivUtil2 to output alongside the .ugoira file; I dunno if this post can even go through with these, but trying anyway.
>>3811 I think 8chan either ate the .ugoira file or I forgot to upload, trying again.
>>3812 Ok... even 8chan treats it as .zip. Maybe I didn't need to reupload, since maybe you can just change the extension, but I uploaded the .ugoira file to my mega anyway: https://mega.nz/folder/fMdAGTrZ#iwjH3qttWolCsCrmIiRpVA I also couldn't upload the "js" file to 8chan, so that's there, too.
>>3809 My "url import" page still hasn't done anything since updating btw. Also I am still partially suffering from the same thing that started/ has maybe been happening on and off since several weeks ago, where the image preview window doesn't display anything, despite an image being selected. But the condition for it this time is the image preview window works for EVERY OTHER IMAGE except the one and only image I had open in my first page. I don't know how to explain it. The first page in my hydrus is a search query that returns a single image. I have it selected, show it show up in the image preview window. It's actually from a doujin, and if I open the doujin in a new page, literally every other page of the doujin can be displayed in the image preview window, but not the single page I had specifically chosen to have displayed in the preview window. I think this happened before, and I restarted my client to fix it. But it's very annoying. My client is unusable for 30 mins to an hour after boot. I guess I will do it again anyway.
(5.92 KB 392x193 kemono.png)
Trying to download stuff from kemono.party, it's showing most URLs ignored, and so far nothing is showing up on the right. Under network > data > session cookies, it shows I have 3 cookies that expire in a year. According to the github, I need to import my cookies.txt for the site; where can I get this in firefox? Are there any easier workarounds for this btw? This seems like a PITA. Also, is there any cookie isolation or anything for privacy concerns in Hydrus?
>>3815
>where can I get this in firefox?
Btw I meant without the Hydrus companion.
>>3815 >>3816 I just tried copying the values from the dev console > storage manually into Hydrus' cookies area for kemono. Didn't fix the issue. I also tried importing a cookies.txt file from the cookies.txt addon, and it just says it doesn't look like a Netscape format cookies file, despite the addon saying it is. Either way, when I added the downloader, it already had 3 cookies in hydrus, so I don't think that's the issue despite what https://kemono.party/patreon/user/18726263 says.
>>3817 I notice it's showing 404 errors now though, not 403 which it was before
>>3814 The weird image preview window thing I was complaining about here did go away for the one image it wasn't working for after I restarted. But also I was terrified of it happening again, so I didn't click on the image at all until like an hour after booting, when I clicked on another image I didn't care about first, to see if it would load. I really don't know why these weird things keep happening only to me but whatever I guess. At least my database hasn't shit the bed or anything yet.
(2.42 KB 419x166 borg.png)
Hi, this is probably not even on anyone's radar but would there be a way to separate the inbox from the archive in terms of folder structure (or maybe an equivalent clever solution)? My use case is that I backup my hydrus database daily (using borgbackup), and I have a lot of subscriptions that autodownload, then I filter/delete them later. The thing is that borgbackup (and most backup solutions) keep daily delta snapshots of the data, so even after I delete the files in my inbox, they persist for an indefinite amount of time within my backups. Over time this can lead to a lot of backup inflation by storing files that I never intended on keeping in the first place. The only other thing I can think of is maybe having hydrus itself automatically back up to a directory (with only archived etc files), and then running borg backup on that new directory.
>>3800
>hydrus now keeps a track of when files were deleted
Can you also keep track of/sort by when the file was archived?
>v435
>the client api now supports wildcard and namespace tags in the file search call
It just doesn't work and returns an empty file_ids all the time.
(1.79 MB 360x360 1618644690810.webm)
Hydrus does not start when using Wayland on Gnome on Fedora 34. When trying to start from the terminal, it gives this error message:

2021/05/01 09:44:38: hydrus client started
Warning: Ignoring XDG_SESSION_TYPE=wayland on Gnome. Use QT_QPA_PLATFORM=wayland to run on Wayland anyway.
(client:13459): GLib-GIO-ERROR **: 09:44:39.193: Settings schema 'org.gnome.settings-daemon.plugins.xsettings' does not contain a key named 'antialiasing'
Trace/breakpoint trap (core dumped)

Everything works on X.org on Gnome on Fedora 34. webm unrelated.
(1.38 MB 935x1417 consider.png)
>>3850
>Warning: Ignoring XDG_SESSION_TYPE=wayland on Gnome. Use QT_QPA_PLATFORM=wayland to run on Wayland anyway.
Heads up, kemono is broken because all the files moved over to data.kemono.party
>>3713
You should be good. The problem I am aware of is that google and co may not respect various OS file locks that SQLite relies on, so it may cause bad data to be written while the client is active and talking to the database. Your regular backup files, which are not being actively written to in a clever way, should be fine to cloud backup.

>>3716
Thank you, I will add this to the list in the Client API help!

>>3717 >>3718 >>3720
Yeah, the boned count is just the total of the files in your 'my files', I think. You also have your thumbnail directory and the four db files themselves in the base install_dir/db directory. More info on the hydrus database structure is here: https://hydrusnetwork.github.io/hydrus/help/database_migration.html If you sync with the PTR, your db will blow up to going on 50GB eventually.

>>3742 >>3743
Yes, I would really like to do this. Just generalise the current hardcoded archive/delete filter to a set of different processing rules and then let you change and add your own actions for custom workflows. This will take some time and work unfortunately, but it will get easier as I slowly improve shortcuts (particularly supporting multiple actions, like 'tag with "favourite" and move on to next file'). If you turn on help->advanced mode, you will see 'custom shortcut sets' under file->shortcuts. This was the original iteration of custom filters. They are basically shortcut sets you can turn on and off in any media viewer. Some advanced users mix in shortcut sets into their normal archive/delete, using shortcuts to tag files as they go. One user has rigged up a button-UI interface on a VNCing tablet hooked into autohotkey, to get comfy tablet touch processing while watching TV.
>>3760
I do not know what is going on here. I had this report, reproduced it once, and then my downloader was working ok again. I don't know if they are throwing up a new custom error page like 'please slow your downloads' or something. I will give it another look and see if I can catch the new page data that is being sent. If you want to see the downloaders, check the network->downloader components menu. There's GUGs, URL Classes, and Parsers. Some basic help on the whole system starts here: https://hydrusnetwork.github.io/hydrus/help/downloader_intro.html

>>3809 >>3789 >>3814 >>3819
I am sorry for the continued trouble here. I am not sure what is causing that page to not work, but my best guess now is that it somehow got a 'poisoned' id or referral URL or something else behind the scenes, so the downloader system is quietly breaking or similar when the session loads. If you select those pending import items in the queue and right-click, you should be able to 'copy sources' to your clipboard. If you open a new url download page and click the paste button, it'll queue up again. If it is just that page that was somehow deadlocked, they'll start downloading as normal. If so, I recommend you then close the old page. If a new page is also broken, it might be one of those URLs somehow. Let me know how you get on.
>>3801
Thanks, yeah, I see the same in my client too. It is sorting by 'next check time' for everything, which still lingers for 404 and DEAD watchers, even though it is never used. I'll make sure it only does it for active downloaders.

>>3802
Yeah. My original plan, a couple years ago when I made the new network engine, was to have a dialog like tags->migrate tags that would act as a 'URL manager' and let you merge and normalise everything in your database. It kept getting put off, because the implications and secondary systems needed by something that complex are something I can't tuck into normal work. I don't know the correct answer here. A different but related update will be going from storing 'http://example.com/post/123456' to ('example.com', 123456), and then URL rendering will happen according to URL Class rules at the time, which could change. At that point, it would be much easier to dupe or convert one set of stored URL location information to another domain, so perhaps this is better put off, as much as that answer sucks.

>>3811 >>3812 >>3813
Yeah, the secret of Ugoira is that it actually is a zip file of images, and then a renderer adds in some optional JSON with variable frame timing info and pulls and shows each frame for the appropriate duration. It is the two-file issue that has been the biggest problem here for me. I forget if pixiv still deliver the JSON in a separate structure or if it is all bundled in the zip now (as I would do when I add ugoira support), but proper support here is going to have to wait for me to get stuck into the first phase of CBR/CBZ support, when I will be writing some sort of archive contents inspection system, so hydrus can start differentiating real zips from cbzs from ugoiras from some .docs. Sorry for the delay!
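In the meantime, anyone wanting to flatten their ugoiras into something hydrus already handles could try something like this hedged sketch. It assumes a frame-timing json shaped like {"frames":[{"file":"000000.jpg","delay":40},...]} with delays in milliseconds, which is what pixiv's ajax endpoint returns; PixivUtil2's output may differ, so check yours first:

#!/usr/bin/env bash
# usage: ./ugoira2webm.sh work.ugoira work.json out.webm
set -eu
TMP=$(mktemp -d)
unzip -q "$1" -d "$TMP"
# build an ffmpeg concat list with per-frame durations (in seconds)
jq -r --arg d "$TMP" \
    '.frames[] | "file \($d)/\(.file)\nduration \(.delay / 1000)"' "$2" > "$TMP/list.txt"
ffmpeg -f concat -safe 0 -i "$TMP/list.txt" -c:v libvpx-vp9 -pix_fmt yuv420p "$3"
rm -rf "$TMP"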
>>3814 >>3819 I am sorry, I hadn't read this properly. To address the preview image not showing and other weirdness: if you are the guy with the large session and many downloaders, I will have to reiterate that my best current explanation for your client's odd behaviour is that it is overwhelmed with tasks to perform and some thread scheduling is getting deadlocked or just choked. This could be a great explanation for why that download page is failing to start. I will keep optimising and improving my thread workers, but closing pages and pausing downloaders is your best strategy for now.

>>3815 There are some add-ons that'll put a button on your firefox bar to export a cookies.txt. There are slightly different formats, and it can be a further PITA to find one that is written so that the hydrus button can understand it. EDIT: ah, yeah, >>3817 . For privacy concerns, do a quick read of the cookies.txt you are exporting--it should be human readable, and if it is just one site, it'll be a couple of kilobytes at most. I just tested the 'cookies.txt One Click' firefox add-on, and it seems to work well. It just does the current domain, one click to save the file to your downloads folder, which then drag-and-drops into hydrus ok.
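If you want to sanity-check a cookies.txt before importing it, the Netscape format is simple: one cookie per line, seven tab-separated fields. A quick reader sketch (note some exporters prefix HttpOnly cookies with '#HttpOnly_', which a naive comment check like this will skip):

def read_cookies_txt( path ):
    
    # fields: domain, subdomain flag, path, secure flag, expiry (unix time), name, value
    cookies = []
    
    with open( path, encoding = 'utf-8' ) as f:
        
        for line in f:
            
            line = line.strip()
            
            if line == '' or line.startswith( '#' ):
                
                continue # comments and blank lines
                
            
            fields = line.split( '\t' )
            
            if len( fields ) == 7:
                
                ( domain, subs, cookie_path, secure, expiry, name, value ) = fields
                
                cookies.append( ( domain, name, value ) )
            
        
    
    return cookies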
>>3842 Not yet. A future system I will build will be a 'metadata to property' system, where you will say 'anything archived, give it a thumbnail with blue border' or 'anything with tag blah on PTR and rating bleh, add the same tag on my tags'. Essentially a logical construct that I can spam anywhere we want to differentiate files based on their metadata. When that is available, I'll integrate it into the file storage system, and then you'll be able to migrate files across different locations based on arbitrary metadata producing different score values. I expect the current work towards multiple local file services to play an important role here for several users as well.

I think that borgbackup is probably a bad fit for hydrus's file storage. It might work great for the four client.*db files in install_dir/db, but yeah, there is no way to curate a 'yes this is all good' file storage yet.

>>3847 I am thinking about this. I always shy away from storing timestamps for everything, since they often just become bloat, but doing it for deletes turned out neat. The only small additional hiccup here is that the client doesn't actually store an archive, but an inbox, so I'll need a new data structure to store info for 'archived' rows.

>>3849 SHIT. Thank you for this report, I will check into it. I thought I had all this working correctly and tested, but perhaps I missed something with URL encoding.

>>3850 >>3854 Thank you for this report. The different Linux flavours are beyond my expertise. If platform compatibility tricks don't work for you, but you are able to pull Arch packages, this will run the client from source, which always improves compatibility: https://aur.archlinux.org/packages/hydrus/ Otherwise, you might be looking at running from source manually, which I have a guide for here: https://hydrusnetwork.github.io/hydrus/help/running_from_source.html

>>3855 Thank you, I let the downloader guys know. kemono is pretty popular, so I should think an update will come to the github repo without a huge delay.
I had a gallery import page for importing pixiv artists by ID. The list of IDs was long and it took all of April to download it all. Just to make sure I got everything before setting up subscriptions for the list, I cleared the list of downloaded artist IDs and pasted the same list back in to grab what they had uploaded over the last month. Now I see that for some IDs, 'status' is DONE and 'items' is blank, indicating that the artist deleted his account between me importing last month and me importing now. My question is, how do I find the images belonging to deleted artists? I have the images locally from the first gallery run, but because I cleared the list of IDs, it seems the info that would let me find them is lost. The image tags do not include the artist ID. I would have to run into them in my inbox, or perhaps exclude all of the tags for the artists who haven't deleted their shit and dig through the leftovers. Fucking nips deleting their online accounts.
>>3858 >>3860 I can't really try to work around my deadlocked "url import" page right now, because I added a bunch of stuff to my "gallery downloader" page and that seems to pause for a long time due to bandwidth. But in a few days or so I will try.
>>3847 I was just thinking this the other day. I often import stupid huge amounts of files from several sites at once and then archive by site, and sorting by import time tends to screw with the "chronological" order I have in my head based on when I archived them. It also causes files in a series or alternate files to become "out of order" when a file from another site happened to be imported before all of the related files were.
>>3860 Have you tried the Kemonoparty downloader to check whether this is actually a me issue, and not an issue with the downloader? Or can anyone else test this?
ok so what do I need to set up to be able to see my archive from my phone? The client api and something on my phone to connect to the api?
(690.08 KB 1366x768 Screenshot_20210502_194446.png)
(69.54 KB 542x629 Screenshot_20210502_192516.png)
I found a bug using Hydrus.Network.436.-.Linux.-.Executable.tar.gz
I just crossed the 10,000 item count threshold and the "system:everything" filter line is gone. Restarting Hydrus doesn't fix it.
>>3872 It's intended behavior. You can disable it in options.
Can I remove the inbox results in the client api on the hydrus side? Blacklisting system:inbox doesn't seem to work.
Can you provide more options for sorting/viewing subscriptions? As it is, subscription management is completely removed from the standard viewing experience. I can't tag subscriptions in any way, like saying "this artist does a lot of x subject matter", or anything. I can't just view the last 30 days of imports from any given subscription. I can't even view what images are imported under any given subscription with the current way you have to manage subscriptions - it's just lines of text and IDs.
>>3881
>I can't tag subscriptions in any way by saying "this artist does a lot of x subject matter", or anything
What do you mean? You can add tags per subscription.
>I can't just view the last 30 days of imports from any given subscriptions
Just add a tag and search by "time imported < 30 days" and "subscription x".
>I can't even view what images are imported under any given subscription
Just set it to publish the subscriptions to a page; that way, whenever something is imported, the client creates a [site]:[query] page with all files under it.
>>3885 I don't know what you thought I meant by saying "I can't tag subscriptions in any way by saying "this artist does a lot of x subject matter", or anything." I was trying to say I can't do exactly that. I want to tag the subscription, not each indiscriminate image the subscription imports.

>>I can't just view the last 30 days of imports from any given subscriptions
>Just add a tag and search by "time imported < 30 days" and "subscription x".
Again, I feel you're completely missing the point of my feedback. Do you realize that in reply to my feature request, you are saying "instead of functionality to do that ever being added, why don't you bootleg it with poorer tools"? I wish you had at least worded this by acknowledging that what I want is different, and that this is strictly worse, instead of making me compare them and explain why it's worse. I don't want every single indiscriminate image from a subscription to have a tag "subscription x". This is abysmal. At this point it is no different from an artist tag. And my complaint was that all of this is completely removed from the "manage subscriptions" window.

>> I can't even view what images are imported under any given subscription
>Just set it to publish the subscriptions to a page, that way whenever something is imported the client creates a [site]:[query] page with all files under it.
I am truly tired. I'm sorry. I don't really want to have this conversation anymore. I just wanted to sort my subscriptions, not have my request ignored completely and then be told I can do something removed from what I wanted, with no acknowledgement of how removed it is. Nothing of what you said actually has anything to do with sorting the "manage subscriptions" window. You didn't even acknowledge that I wanted to sort my subscriptions in any way. My subscriptions, not the images they import.
>>3880
https://hydrusnetwork.github.io/hydrus/help/client_api.html#get_files_search_files
There are two attributes in the API call for this, system_inbox and system_archive.
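For example, something like this should fetch archived files only (a sketch assuming the default port, and with a placeholder access key and tag):

import json
import requests

API = 'http://127.0.0.1:45869' # default Client API port
KEY = 'YOUR_ACCESS_KEY_HERE'   # placeholder

resp = requests.get(
    API + '/get_files/search_files',
    params = { 'tags' : json.dumps( [ 'samus aran' ] ), 'system_archive' : 'true' },
    headers = { 'Hydrus-Client-API-Access-Key' : KEY }
)

print( resp.json()[ 'file_ids' ] )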
>>3874 I found the check box. Thank you.
Tangentially related, but are there any good sites to get your shit tagged? I posted a bunch of pics throughout a few boorus around two years ago and got pretty much no tags out of it. Gelbooru is slightly better but the mods are really gay.
Right now when I select "sort by time imported" and have collections enabled, all the collections are moved to the bottom of the list. Is there a way to incorporate the collection groups into the rest of the list instead of having them grouped at the bottom?
(410.63 KB 644x1054 ClipboardImage.png)
>>3913 I tried it now and it seems to be fucked. For whatever reason, collections don't seem to work with some sorts, like resolution ratio and time imported. It seems to just place them randomly, either before the individual images or after.
Is there a way to specify a date range in subscriptions? I know that certain *booru based sites support the date constraint. For example you can specify date:2_months_ago..2_weeks_ago at e621, but it doesn't work at Gelbooru. Is there any universal way to do it?
I had a great week focusing almost entirely on improving media viewer performance. I have overhauled the way images are zoomed and drawn to screen and completely eliminated the additional lag and memory bloat from zooming big images. Furthermore, the caching of rendered image data is greatly improved, so flicking back and forth between neighbouring images or different zooms is, for the most part, now instant. The release should be as normal tomorrow.
https://www.youtube.com/watch?v=LE8QKcriHH4

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v438/Hydrus.Network.438.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v438/Hydrus.Network.438.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v438/Hydrus.Network.438.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v438/Hydrus.Network.438.-.Linux.-.Executable.tar.gz

Hey, this causes errors if you are running from source and using PyQt5 (PySide2 is fine)! All the releases above are PySide2, so they are ok! I will fix this for next week, so if you are running from source with PyQt5, please hold off for now.

I had a great week overhauling the media viewer's image rendering. Zooming and navigation should be a lot smoother now!

image tiles

tl;dr: the media viewer now zooms and navigates with less lag and flicker

Zooming in a long way, particularly for large images, has been pretty hellish for as long as the program has existed. Historically, the client drew the whole image in memory at the zoom you desired so it could quickly show you the bit it needed on screen. Of course this meant zooming in to 400% on anything above 4k was suddenly taking a very long time to set up and eating a lot of memory to do it. As images have naturally grown over time, the problem has occurred more often and was starting to affect other systems.

My plan to fix this has been to break the image into tiles that then render on demand. The parts of the image off-screen are never drawn, saving CPU and memory and allowing arbitrary zoom. This is a significantly more complicated idea, and rewriting the whole rendering pipeline was always expected to be a multi-week 'big job'. I originally planned to just optimise and tweak the secondary systems and add in some sanity brakes this week, but I ran a couple of small tiling tests and realised if I went bonkers it would be possible to hack in a prototype. So I did!

In the media viewer, images now draw in tiles. It works a little like a browseable satellite map, where when you zoom in and pan about you see squares of data fading in (except in hydrus they appear instantly). You should now be able to zoom in as far as you like on an image pretty quick, and you won't have any sudden memory needs.

Furthermore, I have written a cache for these image tiles. This saves CPU when revisiting different images or zooms, so when you flick back and forth between two normal things, it should now be instant! It still takes 20-200ms to view or zoom most images the first time, but going back to that view or zoom within a minute or so should be really smooth. The cache starts at a healthy 256MB this week. I think that will cover most users very well (in screen real estate, it works out to about 35 x 1080p worth of tiles), but you can alter it under the settings at options->speed and memory.

And I did some misc work improving the rendering pre-fetch logic when you browse in the media viewer. Huge files won't stomp all over the image renderer cache any more, which should make browsing through a series of giant images far less jank. If you are feeling advanced, you can now edit the prefetch timing and distance settings too, also under options->speed and memory.

I am really pleased with this week's work, but there are some drawbacks: I did it quick, so I cannot promise it is good. The most obvious bug already is that at around 200-500% zoom you start to see tiling artifacts.
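To make the idea concrete, the core of the tiling is just mapping the visible viewport back to native image coordinates and only spending CPU on the tiles that intersect it. A little sketch, with a made-up fixed tile edge (the real thing also has the cache and edge handling on top of this):

TILE_SIZE = 768 # hypothetical native-resolution tile edge

def tiles_in_view( view_x, view_y, view_w, view_h, zoom ):
    
    # map the on-screen viewport back to native image coordinates
    ( nx, ny ) = ( view_x / zoom, view_y / zoom )
    ( nw, nh ) = ( view_w / zoom, view_h / zoom )
    
    # indices of the tiles that rectangle touches
    ( first_col, last_col ) = ( int( nx // TILE_SIZE ), int( ( nx + nw ) // TILE_SIZE ) )
    ( first_row, last_row ) = ( int( ny // TILE_SIZE ), int( ( ny + nh ) // TILE_SIZE ) )
    
    return [ ( col, row ) for row in range( first_row, last_row + 1 ) for col in range( first_col, last_col + 1 ) ]

Everything outside that list is never resized or stored, which is why memory use no longer scales with zoom.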
I know what causes the tiling artifacts (interpolation algorithms not getting full pixel neighbour data from my simple tessellating tiles) and have a plan to fix it (adding a tile border pre-resize, and then cropping). There is also an issue when the 'virtual' image exceeds about 32,000x32,000, so I hacked in a zoom block for that. There may be some weird files that render with other stitching artifacts or bad tile data. Note also that hydrus's 'Animation' renderer (the soundless fallback if you do not have mpv support) does NOT use tiling yet, so it still sucks at zooming!

Please let me know how you get on! If you have a steam-powered GPU or a machine with only 4GB of ram, you might like to wait for 439 so I can address any surprise bugs or performance issues.

PTR and account permissions

The PTR is changing how its accounts work. The shared public account is transforming to a 'read-only' account that can only download, so if you want to upload, you'll be going to manage services to auto-create your own privileged account. This is being done to improve janitor workflow for the various petitions, which were all being merged together because of the shared account. With the recent network updates, it will soon be easier for janitors to send simple messages back to these individual accounts, like 'that proposed sibling was not approved because...'.

Unfortunately, various permission and account-management code has not been tested much until now, so as the PTR guys have been trying this stuff out, I have been working to improve bad notifications and workflows. This week I rounded out account permissions testing with uploading. Hydrus no longer tries to upload content the current account does not have permission for, and if you end up in that situation, popup messages now tell you what is going on. It also catches if your account is currently 'unsynced', with instructions to fix it. Similarly, under 'manage siblings/parents', you can now see and edit all tag repositories (previously, they were hidden if you currently had no permission), but you get a label telling you if you don't have permission.
full list

- media viewer:
- I have hacked in tile-based image rendering for the media viewer. this has always been planned as a larger, longer-term job, but the problem of large images is only getting worse, so I decided to just slam out a prototype in a week. if you have a steam-powered GPU or 4GB ram, you might like to wait until next week to update so I can iron out any surprise bugs or performance problems
- images are now cut into tiles that are rendered on demand, so whenever the image is zoomed larger than the media viewer window, only those tiles currently in view have CPU and memory spent on resizing and storage. as you pan around, new tiles are rendered as needed, and old ones discarded. this makes zooming in super fast and low memory, even for large images!
- although I am happy with this, and overall we are talking a huge improvement on previous performance, it is ugly fast code. it may fail for some unusual files. it slices and blits bitmaps around your video memory much faster than before, so some odd GPUs may also have problems. I haven't seen any alignment artifacts (1-pixel thick missing columns or rows), but some images may produce them. more apparent are some pretty ugly tile artifacts that show up between 200% and 500% zoom (interpolation algorithms, which rely on neighbour pixels, are missing border data with my simple system). I will consider how best to implement more complicated but stitch-correct overlapping tiles in future
- furthermore, a new 'image tile' cache is added. you can customise size and timeout under _options->speed and memory_ like for images and thumbnails. this is a dedicated cache for remembering image resize computation across images and zooms. once you have seen both situations once, flicking back and forth between two images or zoom levels is now generally always instant! this new cache starts at a healthy default of 256MB. let's see how that amount works out IRL--I think it will be plenty
- I tuned the image renderer cache--it no longer caches huge images that eat more than 25% of its total size--meaning these images only hang around as long as you are looking at them--and the prefetch call that pre-renders several files previous/next to the current image no longer occurs on images that would eat more than 10% of the cache size. this should greatly reduce weird flicker and other lag when browsing through a series of mega-images (which before would stomp through the cache in quick succession, barging each other out of the way and wasting a bunch of CPU). in real world terms, this basically means that with an image cache of 200MB, you should have slower individual image performance but much better overall performance looking at images with more than about 5k resolution. the dreaded 14,000x12,000 png will still bonk you on the head to do the first render, but it won't try to uselessly prefetch or flush the whole cache any more
- if you are currently looking at a static image, neighbour prefetch now only starts once the image is rendered, giving the task in front of you a bit more CPU time
- new options for prefetch delay and previous/next distance are added to 'speed and memory'
- note this does not yet apply to the old hydrus animation renderer. that still sucks at high zoom!
- another future step here is to expand prefetch to tiles so the first view of the 'next' media is instant, but let's let all this breathe for a bit. if you get bugs, let me know!
- due to a Qt issue, I am stopping zoom-in events that would make the 'virtual' size of the image greater than 32,000x32,000
- .
- account permission improvements:
- to group sibling and parent petitions by uploader (and thus help janitor workflow), the PTR is moving to a system where the public account is download-only and accounts that can upload content are auto-generated in manage services. this code has not been tested much before, and it revealed some very bad reporting and handling of current permissions. I moved this forward this week:
- if your repository account is currently unsynced from a serious previous error, any attempt to upload pending data will result in a little popup and the upload being abandoned
- manage tag siblings and parents will now show service tabs even if the account for those services does not seem currently able to upload tags or siblings
- if your repository account is currently unsynced from a serious previous error, this is now noted in red text in manage siblings and manage parents
- if your repository account does not have sibling/parent upload permission, this is now noted in red text in manage siblings and manage parents. you will be able to pend and petition siblings and parents ok
- if your repository account does not have mapping/sibling/parent upload permission of the right kind, your client will no longer attempt to upload these content types, and if there is a pending count for one of these types, a popup will note this on an upload attempt
- .
- the rest:
- added https://github.com/NO-ob/LoliSnatcher_Droid to the Client API help!
- improved some error handling, reporting, and recovery when importing serialised pngs. specific error info is now written to the log as well
- fixed a secondary error when dropping non-list, non-downloader pngs on Lain's easy downloader import window, and fixed a 'no interesting objects' reporting test when dropping multiple pngs
- added a 'cache report mode' to help debug image and thumb caching issues
- refactored the media viewer code to a new 'canvas' submodule
- improved the error reporting when a thumbnail cannot be generated for a file being imported
- fixed an error in zoom center calculation when a change zoom event was sent in the split-second during media viewer initialisation
- I think I fixed an issue where pages could sometimes not automatically move on from 'loading initial files' statusbar text when initialising the session
- the requirements.txt now specifies 'requests' 2.23.0 exactly, as newer versions seemed to be giving odd urllib3 attribute binding errors (seems maybe a session thread safety thing) when recovering from connection failures. this should update the macOS build as well as anyone running from source who wants to re-run the requirements.txt. I hacked in a catch for this error case anyway, just a manual retry like a normal connection error, we'll see how it goes (issue #665)
- patched an unusual file import bug for a flash file with an inverted bounding box that resulted in negative reported resolution. flash now takes absolute values for width and height

next week

Back to multiple local file services. Mostly more backend cleanup and prepping File Import Options and the Client API for talking to multiple locations.
>>3951 with those PTR account changes, does that mean that, going forward, all of your uploads to the PTR will be tied and linked back to an individual account? That sounds like a pretty serious privacy concern, given that tag uploads are also tied to specific files.

Also, sometimes I like to make sibling or parent submissions that I know won't actually get accepted, because I just want them for myself. With this new system, will there be a way to just make a parent/child or sibling pair, then choose to not upload it at all, just to have it for yourself? I'm not really looking forward to getting messages back from the PTR janitors saying things like "you're a dumbass" because I uploaded something that I don't really care about them getting anyway.

On another note (and what I originally came here to ask): Is there a way to set watcher pages to only show posts from that watcher that are still in the inbox? I like to archive/delete files from watchers as they come in, but I keep forgetting that files that are archived stay on the watcher page, and it's annoying to have to deselect them before archive/deleting every time.
>>3953
>with those PTR account changes, does that mean that, going forward, all of your uploads to the PTR will be tied and linked back to an individual account? That sounds like a pretty serious privacy concern
I also would be heavily against any features/changes if they invade on privacy. I'd honestly prefer to keep it on a paranoid level of privacy respect.
>>3860 I got cookies.txt to import into hydrus, it says it was added. I go to a new tab in hydrus > download > gallery-dl > selected kemonoparty creator lookup > entered the creator id. It's going through and says 4/58 atm. It is showing it has 2 successful and 2 ignored. The pictures are not showing up, and stuff doesn't seem to be downloading. I see no pictures appearing on the side, and it is just lingering at 'sending request' after 4/58. Idk what's going on. I check the file import status, and I see 2 successful, 2 ignored. The others are empty. The two that have been ignored say 404. As I'm typing this, the downloading bar jumped and says downloading 64kb/196kb, so I assume it will show 5/58 if it succeeds. It got stuck at 64kb though and is saying incomplete response, was expecting 196kb but actually got 112kb, retrying in X seconds.

I also just tried deleting the session cookies for kemono.party in review session cookies, then reimporting the cookies.txt I grabbed for kemono.party. When I import cookies.txt it doesn't add kemono.party to the list, despite saying it successfully added cookies. After giving it a few minutes, it seemed to add 2 cookies to the list, although the txt file shows 4. Idk why it took this long to show even after hitting refresh multiple times.

After doing this, and now going back and readding the query to gallery-dl, it shows X/92 now instead of /92. It also seems to be going through them, currently it's at 32/92. But it's not showing any files on the right. It also says 403 on the left, and 15 successful, 88 ignored, 1 failed. If I check the file import status, I see a bunch of urls; half are ignored and say 403, the other half are successful and say "found X urls". And a few are errors saying:

500: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>
… (Copy note to see full error)
Traceback (most recent call last):
File "hydrus\client\importing\ClientImportFileSeeds.py", line 1187, in WorkOnURL
network_job.WaitUntilDone()
File "hydrus\client\networking\ClientNetworkingJobs.py", line 1541, in WaitUntilDone
raise self._error_exception
hydrus.core.HydrusExceptions.ServerException: 500: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>

This is all extremely confusing, and idk what to do. I need to get this working soon, I'd highly appreciate the help please. Thank you.
the instagram downloader seems to only partially work. i tried downloading 3 accounts, all of which i can see on the instagram website (public, not private, can see without login). 2 are saying error, can't find profile, despite using the username exactly as provided in the url. the third account is working. would prefer not to post the accounts.
>>3957 just woke up, this says 52 successful, 307 ignored, 1 failed. No photos are showing up either. Import status shows 403 on all the ignored. The "successful" ones say "found X new URLs". So idk why, but I can't get the kemono downloader to work. :/
>>3952
>next week
>local file services
>prepping File Import Options
Does this mean that I will be able to import unrestricted file extensions and add a custom thumbnail?
>>3951 >>3952 New zoom is great, but it definitely needs some more work at high zooms, especially for the duplicate filter. See webm.
(27.48 KB 498x427 1.png)
I don't know if it should be considered a bug, but if you have more than twelve subscriptions, their progress/buttons in the message area stop stacking after the 12th one. Though judging by the bandwidth usage, as well as the new images appearing in the inbox, you can see that the subscriptions are still getting processed.
>>3969 What happens if you right (or middle) click to close one?
>>3973 Yep, it closes the button and a new one appears in the bottom, so I guess everything just werks.
I have like 3 questions that may sound retarded but:

You know how some tags have specific colors for categories (character = green, series = purple etc)? Is it possible to control and customize that, like adding new categories or changing their colors?

Are there any plans for Hydrus to rotate images or even edit them, even if it's basic MS Paint features?

And even more specific, but is it possible to create a folder (that can exist outside the program) of files under a specific tag, whether or not they're copied from the program? The last one is like creating a Hydrus-like organized gallery, but as a new folder you make for your desktop. Like, another image folder, but it was created using Hydrus, and you could choose if it would actually bring the images from Hydrus itself or just copy them.
>>3987 File > Options > tag presentation
Is it possible to make Hydrus generate names for files based on the tags they have when you begin a drag-and-drop export? I know I can go Share > Export > File and change filenames to {tags}, for example. But this would require me to export the files into a folder first, and then drag them from that folder to whatever place I want to export them to. I was wondering if there's a way to have the program automatically do this when you just drag-and-drop the files out of the program.
This is probably an annoying question you get often, but how's Tor support on hydrus? I looked around but couldn't find any info on it. Any way to run it through Tor? Will $ torify hydrus do the trick?
>>4015 >Any way to run it through Tor? Install it on Tails, I guess.
>>4016 not ideal
>>4015 What exactly are you trying to accomplish? Most sites will block TOR exit nodes, and using TOR with anything but the TOR Browser will make you easy for other websites to identify (via fingerprinting). Hydrus does support an HTTP proxy, which is what you would use to route traffic through TOR, so you could just route all your traffic through it, but I really don't think you would get that much out of it... You also have to consider your DNS requests, which would be handled by your operating system, not hydrus itself.

The guys at torproject have thought about a lot of the stuff you are giving up by not using their browser: https://2019.www.torproject.org/projects/torbrowser/design/ So, if you want to use tor, use the browser too, otherwise you will most likely be identifiable.
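If you want to experiment anyway, this is roughly the plumbing involved at the python level (a sketch using the 'requests' library, which is what hydrus's proxy option feeds into; it needs the PySocks extra, i.e. pip install requests[socks]):

import requests

# 'socks5h' (rather than 'socks5') resolves DNS through the proxy too,
# which avoids the OS-level DNS leak mentioned above
proxies = {
    'http' : 'socks5h://127.0.0.1:9050',
    'https' : 'socks5h://127.0.0.1:9050'
}

resp = requests.get( 'https://check.torproject.org/', proxies = proxies, timeout = 30 )

print( resp.status_code )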
>>4015 $ torify ./client.pyw doesn't work.
Tried going to file>option>connection and adding socks5h://127.0.0.1:9050 to the http and https fields after unchecking "none". Tried a thread on this v3 onion site as input, but the connection failed so it got ignored.
>>4026 Well at least the ISP wouldn't know what's being downloaded or which specific sites are being accessed. And I want detachment of my identity from being associated with specific sites. Also, strength in numbers; the more people doing the same thing the better. I'm just interested in the automatic downloader tbh.
using v438, got this while switching images in the media viewer

BufferError
memoryview: underlying buffer is not C-contiguous
File "hydrus\client\gui\canvas\ClientGUICanvasMedia.py", line 1778, in paintEvent
self._DrawTile( dirty_tile_coordinate )
File "hydrus\client\gui\canvas\ClientGUICanvasMedia.py", line 1677, in _DrawTile
tile = self._tile_cache.GetTile( self._image_renderer, self._media, native_clip_rect, canvas_clip_rect.size() )
File "hydrus\client\ClientCaches.py", line 552, in GetTile
qt_pixmap = image_renderer.GetQtPixmap( clip_rect = clip_rect, target_resolution = target_resolution )
File "hydrus\client\ClientRendering.py", line 203, in GetQtPixmap
return HG.client_controller.bitmap_manager.GetQtPixmapFromBuffer( width, height, depth * 8, data )
File "hydrus\client\ClientManagers.py", line 197, in GetQtPixmapFromBuffer
qt_image = QG.QImage( data, width, height, bytes_per_line, qt_image_format )
Sorry I am late back to catching up here. My hydrus situation is an overloaded clusterfuck atm, and then IRL fell on my head for a bit.

>>3862 Not sure if I understood correctly, or if this is helpful for your situation, but could you maybe do:

- Search for 'has url: pixiv', and maybe 'system:imported since 10 days ago' or whatever
- Collect by creator namespace
- For each collection, open the front image in the media viewer, then click the pixiv url in the top-right hover to open it in your browser
- Pixiv should let you know if the image is deleted--does it also give the artist id there? If it does, you can then write it down
- Close the media viewer and hit ctrl+r on that collection to remove it from your list of creators to go through

It depends on what exactly you want--the actual pixiv id, or just the creators that are deleted. If you just want to know the deleted creators, then following known urls through the media should be simple. If pixiv doesn't let you see the artist id of deleted content, but you still want it, I know a bunch of the pixiv ids are siblinged on the PTR, so if you sync with the PTR, try right-clicking on pixiv creator tags in hydrus and look up the siblings.
>>3867 I am going to write better thread debugging UI in the next few weeks to better profile some Client API stuff. That will also help your situation--we'll be able to see which workers in the thread pool are choked. Please keep an eye out for when I get to this--and please remind me if it doesn't happen--and we can look into this and get some better behind-the-scenes info.

>>3868 Thanks. As a side thing here: my big hope is to finally have 'modified date' population from sites (basically using some 'reasonable' minimum-seen-time logic from site post time), at which point we'll have a nice time-sort based on when files first appeared online. Hand in hand with this will have to be a way to retroactively populate these timestamps with a special downloader for this info, since so many of us long-time users have garbage modified dates for our downloaded files.

>>3869 I haven't, and I am afraid I don't know much about it. A downloader guy was not sure about the new data domain and told me that the site is possibly dying, currently only on one server and looking for migration options, so he said we should step back for now and wait to see what happens.

>>3871 Unfortunately it is a bit complicated for now, mostly in beta phases for experienced users to play with. If you are comfortable navigating your home network and setting up UPnP or another sort of port forward, check out the overall help and links here: https://hydrusnetwork.github.io/hydrus/help/client_api.html Hydrus Web https://github.com/floogulinc/hydrus-web is probably the 'best' option now for phone browsing.
>>3872 >>3874 >>3892 This confusion has happened a couple of times, thank you for mentioning it. I should make a little congratulations popup when you hit 10,000 files, with a note about it.

>>3881 >>3885 >>3886 Yeah, I will likely write a history in the database, a table of file | subscription, so you can later load up 'show me everything that came through sub x' in a system predicate. This has been off-limits for a while, since subscriptions were this really difficult-to-access monolithic object, but since the subscriptions breakup, everything there loads up quick and is generally accessible. It is on the back burner for now, but when I get some time to improve subs again, I'll be writing some better sub access and database integration, and also see about quick functions like a right-click menu option from a downloader page query to show 'this query is in sub x already' or 'add this to sub x' real quick, no need for the edit dialog.

>>3909 Linking files to tags is such a generally technically complicated and man-hour intensive job that I ended up writing Tag Repositories (and hence the PTR) for it. I am afraid I am not active in any booru forums/communities in general, so I can't give confident advice on which is better or worse for a certain tagging style.

>>3913 >>3917 Thanks, I'll check it out. It sorts the collections internally, so they'll sort by time imported inside, and then it tries to get an 'aggregate' value for actually sorting the collection thumbnails against each other. For stuff like file size it just gets the total size of the collection. I am not sure what it does for time imported--it might use the min value, or it might be borked. I'll have a look and see if I can show what it is doing better somewhere in the UI.
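The general shape of the problem is that every sort type needs an explicit rule for collapsing a collection's per-file values into one value for the whole thumbnail, something like this (a sketch--'most recent' for time imported is just a hypothetical choice here):

AGGREGATORS = {
    'file size' : sum, # total size of the collection
    'time imported' : max # hypothetical: sort collections by their most recent import
}

def GetCollectionSortValue( sort_type, per_file_values ):
    
    return AGGREGATORS[ sort_type ]( per_file_values )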
>>3942 I don't think so. A subscription is really basically just a gallery download page that isn't in a page. You'll have to see what meta tags the booru supports. Some sites just don't support some search tags because it adds a bunch of CPU on their end when doing the search and pagination. If you missed it, this seems to be their page: https://gelbooru.com/index.php?page=wiki&s=&s=view&id=26263 Some sites support an 'id:>123456', which can be useful to emulate a date search, but I don't see it there unfortunately.

>>3953 >>3954 I am thinking about this. While these issues are an important part of the program, I also don't want to end up in a rabbit hole and end up fighting ghosts. In general, with the proposed account changes, I believe the worst-case privacy fears do not change, but the best-case fears do. I have plans to mitigate the best-case fears. I'll try to be complete with my current thoughts, sorry for going on for a long time:

All I can guarantee is what the client does, and in the process of making an account, the client submits no data about you except your IP address, obviously, implicitly in your request. The server, operating as my code says, forgets that, and should never store an IP address for any interaction on a tag repository. When a janitor leaves a message on your account or does any other action, they change a bunch of pseudonymous keys, the 64-character hex strings you see around the account UI, which are all random. So no one normally can determine anything about you as a person from account data on its face.

The fear, as you say, is that mappings and tag siblings and parents being isolated to one account could feasibly identify someone, à la the https://en.wikipedia.org/wiki/AOL_search_data_leak . Thinking about scenarios for this, I think it would have to be something like if you were an artist and tagged your images before they were uploaded somewhere. Or the old scenario of tagging pictures of your girlfriend. A server admin going through their database could feasibly connect your identity to your account id based on that info and link it to other submissions.

The mitigation I want to add--for servers that are running correctly--is a kind of 'null' account, where all submissions of a certain age (say, 90 days or so) will get moved to the true anon account, all mixed up with each other. This helps some other stuff I am working on, where I want to revive an old 'superban' system that allows an admin to remove an account and all their submissions, to curtail future troll/complete-dumbass behaviour, but which is technically more difficult if I set the range of the superban back to the server's start date.

There's also a timing attack with the shared key that I have already mitigated: uploading things together at the same time means they probably came from the same client, so a malicious observer could differentiate different uploads into 'probably this all came from the same dude' chunks. Saved timestamps are now merged across an update period, so someone potentially picking through the database has no greater upload time resolution than an update period (for the PTR, that's ~28 hours, 100,000 seconds). So a properly running server should be immune to an after-the-fact timing attack on a sufficiently busy server, which the PTR certainly is. But of course I can't guarantee anything beyond what a client does.
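To picture the timestamp merging, it is just integer flooring--a sketch:

# snap every stored upload time down to the start of its update period, so an
# observer picking through the database cannot resolve uploads more finely
# than one period
UPDATE_PERIOD = 100000 # seconds--roughly the PTR's ~28 hour period

def coarsen( timestamp ):
    
    return ( timestamp // UPDATE_PERIOD ) * UPDATE_PERIOD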
If a server admin is malicious, they could change the source code and capture your IP and use session data and timing attacks to isolate uploads from the shared key already. These attacks, though esoteric, have always been true of the shared account, and are ultimately probably true of any sort of server if you imagine a powerful enough adversary, one who could infiltrate TOR or whatever other mitigation you could name. Therefore, the 'worst case' privacy implications of moving to multiple accounts do not, I think, get any worse. The best case privacy implications (i.e. a server running my source code) will be practically mitigated by the merged anon account, I think. You can mitigate the IP address attack, of course, by using a VPN.

One pretty good mitigation if you are worried about individual accounts is to generate one and then share the access key with five people. All your submissions are then merged in the database records. You could also generate yourself a new account every ninety days or something.

My aim is to have at least the kind of privacy you get with an imageboard. Although we work on accounts rather than IP addresses, the accounts are nonetheless pseudonymous, so I am overall content. The best and most practical rule for hydrus, as with any site, is don't upload anything that would identify you personally. You stop being Anon if you post your credit card or face in a thread, so don't do that. Tag common files that have already been uploaded somewhere, nothing private, and no one will ever figure out who you are, even if, in an edge case, they pour a bunch of man-hours into figuring out that you anonymously like both pictures of korean pop stars and stinky feet.

Let me know what you think.
>>3953 To answer your questions about siblings off the PTR: with the new siblings and parents options, you can add some siblings or parents to 'my tags' (or any other local tag service you create--you might like to create a separate empty one just for siblings and parents), and then under tags->manage where tag siblings and parents apply, you can set that service's siblings and parents to apply wherever you like, including a preference order, so you can overrule (or eliminate completely) whatever the PTR wants if you like.

For watchers, I am not sure you can do exactly that filter. The 'presentation' options under 'file import options' are closest to what you want, but they only offer 'new' or 'already in and currently inbox/archived', not 'new and currently inbox/archived'. I don't want to get super complicated with those options, but would it make sense to split that into 'show new/already in' and 'show inbox/archived'? That would solve your problem, since you could say 'show all, but only inbox'. Are there any situations where someone would want to show 'new and inbox' and 'already in and archived' at the same time? I don't think so, so I think I can simplify the options while also giving them more use.
>>3957 >>3965 As here >>4032 , I think the current status of kemono is in doubt, so even if the hydrus downloader actually is working great, it seems their server is giving 500 errors (which means 'serverside error') and other problems (and very slow downloads, it looks like), so it may just be dying or overloaded. Try clicking on the 'file import status' icon button on the downloader page. That'll bring up a big table window with all the import items. If the 'ignored' files have 'note' column information like 'could not connect' or '500 blah blah blah', you can try selecting and right-clicking them and hitting 'try again'. If they say something like 'the parser could not find anything!', then that could have been a text post, or it could be the parser failing. I am afraid I do not know much about the site or the current downloader, so I can't offer any confident advice. Since it got that 58, it seems like the gallery parser is working at least.

>>3967 I am afraid not. The 'big job' I am currently working on is 'multiple local file services', which will let you split your 'my files' into separate partitions that you can search separately or merged. It'll allow you to make a client with clearly separated sfw and nsfw (and other) partitions. The next step in this job is to prep File Import Options and the Client API's file import to specify designated local file service destination(s) for an import to go to. Arbitrary file import and custom thumbnails are a big job and a medium job respectively that will come in the future.

>>3968 Thank you, this is helpful. I'll see if I can improve that when I work on better tiling. There are also a couple of crashes which I will try hard to identify and fix this week. I also had to hack in the 32,000 pixel zoom limit at the last minute last week, which I'd love to fix properly.

>>3969 >>3973 >>3986 Yeah, there is a hard limit of ten messages at once in the popup toaster. If you are advanced enough to have lots of subscriptions, I recommend you check out your subs' 'presentation' options in their edit panel. It can be helpful to have your subs not make the buttons but instead publish their files automatically straight to named pages, so the files populate nicely in the background without you having to click each button, and it frees up your toaster space.
>>3987 For custom file rotations, I have thought about it for a long time. I already do some EXIF rotation stuff automatically, so I have some of the tools ready for it. It will likely come one day, but I can't say more confidently than that.

To create an external folder, check out file->export folders. You can set up a particular file search in hydrus to be repeatedly synced with an external location, so you can set up cellphone wallpapers or a comic or a 'reactions' meme folder or something that is continually repopulated as you change tags or whatever clientside. There are a lot of ways of setting it up and it is powerful, so have a careful play with it. There is a complicated system for setting nice names, too, which is the same phrase system as the one in the thumbnail right-click->share->export->files. Just a thought, but in your case you could set up an export folder that moved anything with a 'fix rotation' tag on 'my tags' to an external folder and then check 'delete files from the client after export'. You could then fix the rotation manually using GIMP or whatever and then re-import.

>>4012 Yep, check options->gui, 'discord drag-and-drop filename pattern', I added it recently. You need the discord bugfix on (which is really just a 'copy files before export' command, which then allows the rename before DnD).

>>4015 >>4016 >>4023 >>4026 >>4027 >>4028 Sorry, I don't know enough about TOR to say well. The proxy settings in the connection options rely on underlying python 'requests' stuff and have been pretty borked for us in the past. I know that your typical consumer VPN solutions, which install system-wide TAP network devices, work seamlessly with hydrus, since the network bypass is handled invisibly to hydrus on the system side, so if TOR offers anything like that, an OS-level redirection, maybe with an application filter on hydrus's client executable, I think that's your best bet.

>>4030 Thank you! This is really useful. I got this once myself, also on a transition, and then a crash. I regret it, but it thankfully seems rare. I really want to stamp this out this week. I encountered it briefly while coding the new tiling system (basically I was resizing a bitmap in GPU space that I should have copied first), but I thought I fixed it. At least I know the basic nature of the problem. It could just be some particular kind of file, or it could be a logical rarity. I am going to chase up the whole pipeline and see where it could still be happening. Please let me know how next week's release works for you.
(37.92 KB 648x490 kemonoparty-hydrus.png)
>>4037
>Try clicking on the 'file import status' icon button on the downloader page.
The ignored ones are all 403's. Retrying just brings back a 403. The successful lines say "found 2 new URLs". No pictures/videos have been downloaded, so it seems all files are failing. Could anyone more familiar with this try to offer some help? This is something I'd really like to get working soon >.>
>>4034
>You could also generate yourself a new account every ninety days or something.
I'm a little confused. You say that Hydrus servers forget the IP addresses that send the request to make an account, but couldn't someone just notice something like "this account stopped uploading stuff, then right after, this new account was created and started uploading stuff"? Or would the legitimate flow of account creation and uploading essentially make deducing something like that impossible? Will you be able to use multiple accounts in alternation for uploading to the PTR? I could see switching between 2 accounts to upload at random being a way to get around that. Regardless, that planned feature you mention of uploads forgetting the associated account after a certain number of days calms most of my fears. So thanks for that!

So far, it seems like I'm still able to upload using the shared PTR account, despite it saying that it's read-only now. I'm not sure if you're in control of this, but it'd be cool if it could still have write permissions until that "null account" feature gets implemented.

>>4035 With overriding PTR relationships using "my tags", are you able to just remove a PTR relationship? e.g. just have a way to say "no, that parent/child relationship doesn't apply" in "my tags", then have it overrule the PTR locally, but still leave the other relationships from the PTR that you do like alone?

I'm not sure what you mean by the new presentation options you're considering, but as long as they allow me to do something like "show me all files the watcher found that are in the inbox, as long as they aren't in the trash or deleted", then that'll solve my problem.
Dev anon, is there any way you can introduce the ability to import files without duplicating them and without moving them? Jellyfin does this, I believe, by simply mapping the directory location. The reason is that for larger files, or directories that are organized nicely on a filesystem level, you can't really import them into hydrus: if the dir is large, you essentially duplicate it (100gb turns into 200gb, and then the backup makes it 300gb for the same directory), and if hydrus moves it, then it's kinda stuck in hydrus. Implementing this would allow other programs to access the filesystem nicely, but also allow hydrus to do its thing with the files. I've heard a few other people desire this as well. This is my biggest gripe with Hydrus currently really, although I love it. Thanks.
>>4049 I'm pretty sure the dev said he doesn't want to implement that because it can lead to all sorts of funky stuff. Currently you can sort of do the opposite in a clunky way with symlinks (import files, export symlinks with original names, and replace) though.
>>3858 I'm having this problem right now, and it seems like any images with the contentious_content tag won't download, even if I can see those images (when I'm logged in with my account, since they're hidden from guests), and Hydrus seems to be using my credentials correctly.
I had an ok week. I improved the new image rendering system, cleaning up instability and errors and mitigating the tiling artifacts. I also fixed some other issues and optimised some database queries. The release should be as normal tomorrow.
>>4028 Again, if you care about automatic downloading, you will not be happy with TOR. There is a public list of TOR exit nodes: https://blog.torproject.org/changes-tor-exit-list-service Most sites will block you if you come from one of those IPs, and there are no workarounds for this, so you will be anonymous, but the only thing you download will be error pages.

If you use HTTPS (you should), then your ISP can only see the domain you are visiting. For example, your ISP can see that you are connecting to https://8chan.moe, nothing more. If you want privacy from your ISP, use a VPN that you pay for anonymously. Then, only the VPN provider can see what you do online, same as the ISP, but they can't link it to your real bank account.

>Also, strength in numbers; the more people doing the same thing the better.
Why? You can set user agents, there might be different versions of libraries out there, etc. I highly doubt that every hydrus client sends the same metadata. You would only be using up bandwidth from the TOR network. Hydrus is not built to be anonymous, and you won't make it anonymous by just piping all traffic through TOR. The same thing for torrents over TOR, see https://blog.torproject.org/bittorrent-over-tor-isnt-good-idea
>>4054 It's 'Tor', not 'TOR'. Did you notice how the official website never once refers to it as TOR?

>Most sites will block you if you come from one of those IPs
That is true, but almost all of the sites I download from in Hydrus do not.

>but [the VPN provider] can't link [your traffic] to your real bank account
But they can link it to your bare IP address. You can't just connect to the VPN with Tor to hide that either; that would not only make you easier to track but also absolutely kill your speed.

>The same thing for torrents over TOR
For a completely different reason, being that Tor doesn't support UDP and that constantly connecting to many random IP addresses to transfer small bits of data over sometimes differing ports is slow. That post was written over a decade ago, so the "network can't handle it" argument wouldn't apply here, especially since Hydrus basically does one of the things Tor was meant to do: download web pages.
https://www.youtube.com/watch?v=CyPKxkH3vB8

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v439/Hydrus.Network.439.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v439/Hydrus.Network.439.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v439/Hydrus.Network.439.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v439/Hydrus.Network.439.-.Linux.-.Executable.tar.gz

I had an ok week. The new tiled renderer is improved.

tiled renderer

The new image drawing system generally worked well! There were a couple of bugs, and it still has some limitations, but in general it really improved zoom and precache performance.

For the bugs, first of all, there was a rare crash, I think triggered by loading a very unlucky coincidence of tile and image size. Then clipboard bitmap copy threw an error, tiny images could not deal with extremely small zoom, and clients under heavy load could sometimes have trouble initialising the viewer. I have fixed them all, but let me know if you have any more trouble!

There was also a problem with PyQt5, an alternative version of the Qt UI library that some 'running from source' users use. It was an object handling difference between PyQt5 and PySide2 that broke the tile caching system. I think I have fixed it, so if you are running PyQt5, please give this version a go.

Beyond bugs, there were tiling artifacts visible at higher zooms. Essentially, where the tiles lined up, there were small disagreements in resize math, resulting in little lines of mismatching colour gradients along tile borders. I worked on the tiling algorithm and have significantly mitigated the problem--I mostly only see artifacts at extreme zooms now, about 2000%.

Since people are suddenly zooming more, users who have mouse-centered zooming were having more images accidentally flying off screen too. I've hacked in off-screen rescue after a zoom, sliding it back to the nearest border, so the image should always stay in view. If people like it, I may patch this in for all media for dragging events too. There's not much need for non-visible media, and when it does happen it can sometimes be a pain dragging around to find where it went.

I hope this basically makes the tile renderer a complete '1.0' now. In the future, I would like to rejigger some of the virtual geometry, since at the moment a limit in Qt means I cannot zoom higher than a 'virtual' 32,768x32,768 canvas (e.g. 4k at about 800% zoom). I'll also replicate the tiling for my native Animation widget, which displays gifs and video when mpv is not available.

full list

- tiled image renderer improvements:
- I believe I fixed the 'non c-contiguous' crash issue with the new tile renderer. I had encountered this while developing, but it was still happening in rare situations--I _think_ in an unlucky edge case where a zoomed tile had the same resolution as the full image rotated by ninety degrees! there is now an additional catch for this situation, as well, to catch any future logical holes.
- fixed a bug in the new renderer when copying an image to clipboard - I greatly mitigated the tiling artifacts with two changes: - - zoomed in tiles are now resized with a padding area of up to 4 pixels, with the actual tile cropped afterwards, which allows bilinear and lancsoz interpolation to get accurate neighbour data and have gradient math line up with neighbouring tiles more accurately - - on resize and zoom, media canvases now dynamically change tile size to 'neater' float/integer conversion dimensions to reduce sub-pixel panning alignment artifacts (e.g. if your zoom is 300%, the tile is now going to have a dimension that is a multiple of 3) - I hacked in a 'rescue offscreen media' calculation after any zoom event. now, if the window is completely out of view after a zoom, it'll snap to the nearest borders, lining against them or overlapping into a buffer zone depending on the zoom. let me know what you think! - I fixed a PyQt5 specific object tracking bug, I think the new renderer now works ok for PyQt5! - cleaned up some ugly code in the resize section that may have been resulting in incorrect interpolation algorithm choice in some situations - fixed a divide by zero issue when zooming out tiny images hugely (e.g. 32x32 at 1%) - media windows now try to have at least 1x1 size, just to catch some other weird error situations - similarly, tile and native sample sizes will have a minimum of size 1x1, which should fix issues during a delayed startup (issue #872) - cleaned up some misc media viewer and tile renderer code - . - the rest: - I started the next round of database optimisation tech, mostly testing out a pipeline upgrade. autocomplete fetching and wildcard file searching for very large queries should be a little faster to cancel now, and in some situations they should be a little faster. they may be slower for very small jobs, but I expect it to be unnoticeable. if you feel autocomplete is suddenly slow and laggy, let me know! - I optimised the basic 'ideal sibling normalisation' database query. this is used in a lot of places, so the little saving here should improve a bunch of work - I greatly optimised autocomplete sibling population, particularly for searches with a lot of tag results - I brushed up the tag import options UI: changed the 'use defaults' checkbox to a dropdown with clear labels for both modes; renamed the 'fetch tags even if' tag import options to 'force page fetch', which is a better description, and added tooltips to describe their ideal use; added tooltips to blacklist and whitelist; and hid the 'load from defaults' button if not set to view specific options - added a 'imgur single media file url' File URL Class, which points to direct file links without a referral header, which should fix some situations where these urls were pointed to by other site parsers - collections now store the _most recent_ import timestamp of their contents as the aggregate for time imported. previously they had no value, so would sort randomly with each other. collections therefore now sort by time imported reliably with each other, even if there is no 'correct' answer here - these new timestamps and service presence generally, and aggregated archive/inbox status, (all of which can update thumbnail display) is now recalculated when files are removed from the collection. so, hitting _right-click->remove->inbox_ will now update collections with a mix of archived and inboxed to remove the inbox icon immediately - as the "Retry has no attribute..." 
network errors have appeared in new forms, I gave the core of the problem another look. we could never really figure this out, but it seemed to be a network version thread safety issue. I think I have ruled this out, and I now believe these may have been occuring during faulty pickling during network session save/load. I fixed the problem here, so with luck this issue will not reappear--if you have had this a lot, let me know how you get on! - I broke the requirements.txt into several variants based on platform. we are going to try to pin down good fixed versions of python-mpv and requests/urllib3 for each platform - I also updated the 'running from source' help significantly, moving everything to the requirements.txt and making sections for things like FFMPEG and libmpv - Also updated the source and contact help around my work style and contact preferences - the test.py file now only does the final input() confirmation if there is an interactive stdin to respond next week Next week is code cleanup and some little jobs that have slipped through the cracks. Nothing too clever, but I want to fit in some misc boring work. Thanks everyone!
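To illustrate the padded-tile resize from the full list, here is a minimal sketch of the idea -- not the actual hydrus code (that lives in ClientGUICanvasMedia.py); Pillow is used for brevity and all names here are mine:

from PIL import Image

PAD = 4  # padding pixels sampled around the tile so interpolation sees real neighbours

def render_tile(full_image, tile_x, tile_y, native_tile_size, zoom):
    # native-resolution rect this tile covers
    left = tile_x * native_tile_size
    top = tile_y * native_tile_size
    right = min(left + native_tile_size, full_image.width)
    bottom = min(top + native_tile_size, full_image.height)
    # expand by up to PAD, clamped to the image, so bilinear/lanczos get true neighbour data
    pl, pt = min(PAD, left), min(PAD, top)
    pr, pb = min(PAD, full_image.width - right), min(PAD, full_image.height - bottom)
    padded = full_image.crop((left - pl, top - pt, right + pr, bottom + pb))
    scaled = padded.resize((round(padded.width * zoom), round(padded.height * zoom)), resample=Image.LANCZOS)
    # cut the scaled padding back off so neighbouring tiles line up
    return scaled.crop((round(pl * zoom), round(pt * zoom),
                        scaled.width - round(pr * zoom), scaled.height - round(pb * zoom)))

The 'neater tile size' trick in the notes then picks tile dimensions that divide evenly at the current zoom (a multiple of 3 at 300%), so those round() calls never disagree between neighbours.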
>>4030 >>4058 problem fixed on my end, thanks!
>>4058
The original PyQt5 crash is fixed, but zooming into pic related (the largest picture I have in dimensions, though not in filesize) and dragging around causes the following to be spammed into the console. If I exit out of the image viewer things restabilize, but keeping on with the zoomed drag causes a crash:

2021/05/13 01:44:34: Uncaught exception:
2021/05/13 01:44:34: ZeroDivisionError integer division or modulo by zero
File "/opt/hydrus/hydrus/client/gui/canvas/ClientGUICanvasMedia.py", line 1804, in paintEvent
dirty_tile_coordinates = self._GetTileCoordinatesInView( event.rect() )
File "/opt/hydrus/hydrus/client/gui/canvas/ClientGUICanvasMedia.py", line 1772, in _GetTileCoordinatesInView
topLeft_tile_coordinate = self._GetTileCoordinateFromPoint( rect.topLeft() )
File "/opt/hydrus/hydrus/client/gui/canvas/ClientGUICanvasMedia.py", line 1759, in _GetTileCoordinateFromPoint
tile_x = pos.x() // self._canvas_tile_size.width()
2021/05/13 01:44:35: QBackingStore::endPaint() called with active painter; did you forget to destroy it or call QPainter::end() on it?
>>4059 Great! >>4060 Damn, thank you for this report! I'll fix it up for next week. You'll want to restart the client after getting a torrent of that 'endpaint' stuff--Qt doesn't like to be interrupted by errors in that phase.
pic related breaks the media viewer.
Evening, friends. I'd like to ask, is there anything wrong with the danbooru downloader? When I try sending it pages through the URL or gallery downloaders, it gives me a "could not find a file or post URL to download!" error in the logs, and I have no idea what could be causing it.
>>4057
>VPN companies can see my IP
What exactly is your threat model here? Do you want to hide something from the glowies or just your ISP? The VPN companies have your IP, but nothing else. If you want a better setup, get 2 different VPNs and route one through the other. What are VPN companies going to do with a random IP if they have no other info about you?
>Torrents don't work because it's UDP
I was implying that "badly" (in privacy terms) implemented software will absolutely leak some information about you, and hydrus uses libraries that will probably have different fingerprints across different versions. Hydrus does its own thing; it's not meant to keep you or your IP anonymous from anyone. The Tor network is meant to keep you anonymous, but that requires effort on your part too - don't run js, use the Tor browser, don't log in, etc. You can't just throw all your traffic over Tor and assume that it's private because of that.
>>4034 >you anonymously like both pictures of korean pop stars and stinky feet OH SHIT YOU'RE ONTO ME!
Bug: Hydrus won't boot
OS: Mac 10.13
Version: 439

<timestamp>: hydrus client started
<timestamp>: hydrus client failed
<timestamp>: Traceback (most recent call last):
File "~/Hydrus Network.app/Contents/MacOS/hydrus/hydrus_client.py", line 215, in boot
from hydrus.client import ClientController
<omitted 20 lines>
File "~/Hydrus Network.app/Contents/MacOS/lib/shiboken2/files.dir/shibokensupport/__feature__.py", line 142, in _import
return original_import(name, *args, **kwargs)
ImportError: dlopen(~/Hydrus Network.app/Contents/MacOS/lib/cv2/cv2.cpython-39-darwin.so, 2): Symbol not found: ____chkstk_darwin
Referenced from: ~/Hydrus Network.app/Contents/MacOS/lib/cv2/.dylibs/libavutil.56.70.100.dylib (which was built for Mac OS X 10.15)
Expected in: /usr/lib/libSystem.B.dylib
in ~/Hydrus Network.app/Contents/MacOS/lib/cv2/.dylibs/libavutil.56.70.100.dylib
<timestamp>: hydrus client shut down

438 works fine
>>4040 still no werk, can anyone else test pls >.>
maybe bug? this happened one time but not other times:

booting gui…
[GFX1-]: More than 1 GPU from same vendor detected via PCI, cannot deduce device
shutting down gui…
(3.72 MB 1500x1405 Born of root and void tea.jpg)
>>4040 >>4073
403 is access denied, the server is refusing to serve you the thing. It is most likely not on your end, but try grabbing cookies with Hydrus Companion if the website works for you, it might be tripping on some kind of ddos protection.
(141.19 KB 1280x720 The Target is Into Flat Chests.jpg)
>>4058 >>4081 >hydrus companion can we PLEASE submit this shit into the firefox add-ons server btw please. https://extensionworkshop.com/documentation/publish/submitting-an-add-on/
Feature request / idea: watcher auto-rules:
- Set specific actions for rules that would match the domain, a regex of the path, or the watcher's title
- Rules can (for example) auto-set tags ("if URL ~ '^\/h\/', add 'medium:drawn' to the tags"), set the checker's refresh rate ("if domain == 'boards.4chan.org' and URL ~ '^\/b\/', set refresh rate to once per minute"), auto-remove (or open in a new page) the watcher once you get to a 404 / DEAD thread (because for example you've auto-tagged already), select a filetype to download ("no gifs, only webm")...
What do you think? A rough sketch of the idea follows below.
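To make the request concrete, a hypothetical sketch of what such a rule object could look like -- nothing like this exists in hydrus today, and every name here is made up:

import re
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class WatcherRule:
    domain: Optional[str] = None        # e.g. 'boards.4chan.org'
    path_regex: Optional[str] = None    # e.g. r'^/h/'
    add_tags: List[str] = field(default_factory=list)
    check_period_seconds: Optional[int] = None  # None = leave the checker default

    def matches(self, domain: str, path: str) -> bool:
        # a rule applies only when every condition it actually sets is satisfied
        if self.domain is not None and domain != self.domain:
            return False
        if self.path_regex is not None and re.search(self.path_regex, path) is None:
            return False
        return True

rule = WatcherRule(domain='boards.4chan.org', path_regex=r'^/h/', add_tags=['medium:drawn'])
print(rule.matches('boards.4chan.org', '/h/thread/123456'))  # True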
Found a bug. If you hide the pages tag in the tag display for single files, sorting by pages breaks. It doesn't seem to be completely random, just some fuckups along the way (e.g. pg 1, 3, 4, 5, 7, 6, 2).
>>4041
Yeah, I think there's a general problem having anon of any sort on a server with not many users. As you say, the PTR is a busy server with, I don't know, probably at least a thousand users. Trying to pull any history apart here is probably realistically impractical, but if you made a server for you and two friends, and suddenly you noticed a bunch of 'diaper' tags, it may not take the biggest database detective to unwind who of the three was doing what when.

I am not sure when I can get that null account done. I am overwhelmed at the moment, and the server work is still pending a tag filter and a complete overhaul of the petitions processing system. If it helps, the null account will be retroactive. Please keep reminding me if it doesn't get done. Even if I can't guarantee something nice and quick, I don't want it to fall off my plate.

The normal user account can still upload because I had long ago fucked up some clientside UI regarding permissions, and when we tested turning permissions off for the normal account, suddenly people seemed able to upload siblings when actually the server was discarding them. It should be fixed now, but I'm not in a rush to push the PTR guys to make these changes. We had bugs with auto-account creation as well, when we tested that. Most of this code is running for the first time at large scale, so a bit of my time is reworking stuff either just to fix bugs or to work properly for humans.

There's no way to choose one by one to 'un-apply' a relationship (I've long wanted something like this, although real-world single-row management of 170,000 rows is probably impractical), but if you apply an overriding relationship on 'my tags' when it has higher precedence than the PTR, then the sibling/parent 'tree builders' will skip the logically incompatible rows. e.g. if you have siblings:

my tags - A -> B
PTR - A -> C

You'll get A->B. Siblings are n->1, and since A is already mapped by 'my tags', the invalid A->C will be skipped over. Same for parents and cyclical trees. Basically, set what you want in 'my tags', and that should come out, one way or another.

I agree 100% though that the workflow and comprehension here is hell. Pairs is technically easy, but it isn't the way to view this data. If I had a hundred hours, I'd write a visual graph control and have you dragging and dropping arrow relations around and do the difficult math behind the scenes. Something I'll have to plan for in the future. The duplicate system has similar shapes and suffers from the same usability problems.
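A toy sketch of that n->1 precedence idea -- this is not the real tree builder, just the 'first service to map a tag wins' logic described above:

def build_siblings(services_in_precedence_order):
    # services_in_precedence_order: list of {bad_tag: good_tag} dicts,
    # highest precedence service (e.g. 'my tags') first
    merged = {}
    for pairs in services_in_precedence_order:
        for bad, good in pairs.items():
            if bad not in merged:
                # a lower-precedence service cannot remap a tag once it is taken
                merged[bad] = good
    return merged

# 'my tags' says A -> B, so the PTR's A -> C is skipped as logically incompatible
print(build_siblings([{'A': 'B'}, {'A': 'C'}]))  # {'A': 'B'}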
>>4049 >>4050
My best writing on this is here: https://hydrusnetwork.github.io/hydrus/help/faq.html#external_files

There's some other stuff in that FAQ about filenames too, if you want to see a bit more of where I am coming from.

Hydrus is a little different to something like Jellyfin simply by the sheer number of files we are normally talking about. I am usually managing collections larger than we can really get a handle on. I recommend hydrus for all mixed up imageboard-tier files, but I don't recommend it for paged comics or larger 500MB+ movies. Anything with a simple and obvious filename schema, like comics, or anything where you have fewer than, let's say, a thousand objects, like movies, you can store and manage efficiently with a human brain. Anything where there is no obvious filename and you have more than 10,000 files, the location storage is best handled behind the scenes.

If you have a small number of particular files you want to keep external, I recommend you keep them that way. Advanced users can also set up some external folders that hydrus will keep populated with a subset of files (e.g. if you want to export cellphone wallpapers somewhere), and with a bit of work you can even give those files nice filenames.

As I say in the help in a couple of places, if you are uncertain about hydrus, give it a go with a few hundred files and see if you like it. Many people find they don't miss external filenames at all, since the full magnitude of managing their tens or hundreds of thousands of files is such a bigger problem overall. But perhaps not. It isn't for all file types, and it isn't for everyone. No worries if you don't want to buy in to the storage system at all.
>>4051
The last thought I can offer is the User-Agent as set under network->data->manage http headers. Some CloudFlare problems can be solved by matching that to your browser as well as cookies, so maybe sank has a similar thing. I'll be adding automatic http header editing to the Client API so Hydrus Companion can copy this for you in future. It is a pain at the moment. But if it is related to that tag, it definitely sounds like a credentials issue of some sort rather than the site failing. Thanks for letting me know. I'll be interested to know if you discover any more, if User-Agent fixes it or anything else.

>>4062
Thank you for this report and the example file. I am sorry to say that most of these problems with the new tile viewer are based on local conditions, particularly unlucky combinations of zoom options and media viewer size. That image loads fine for me, but I imagine that 6 in the 506 is somehow creating an odd tile for you, or maybe your client has some bad metadata. Does that image crash your client, or do you get an error? If you get an error traceback, can you paste it to me here? I have several errors with '!ssize.empty()' in the main text, and I have a plan to fix it. I'll also be writing a more general error catch and try to deliver black tiles or something for future errors to put a cap on these crashes.

>>4063
Sorry, it seems to work here when I do a quick simple search. Any chance you are searching for content that is only available to a logged in user, or other 'contentious content' problems like the sankaku guys are having? If you are feeling brave, you can try pasting a danbooru url into the test panel once you dig into network->downloader components->manage parsers. That'll show the full text of the 'empty' page, and maybe shine some light. Might be an error message.
(87.04 KB 1772x469 client_21-05-16 137.png)
>>4090
I'm not quite sure I did it right, but here's the result of trying to run this URL
https://danbooru.donmai.us/posts/4516636?q=parent%3A4209825
through pic related's interface, which outputted this as a result
https://pastebin.com/Pwhqw5td
I hope it can be of some help!
>>4069
I am sorry for the trouble here. I am not a huge expert on macOS, so I cannot talk too confidently, but it looks like a library updated somewhere, maybe that libavutil, and your 10.13 is missing some newer tech to go with it. The only thing I changed in the requirements script last week was a limit on some networking libraries, which this is not talking about, so I think it must be one of the libraries that updated last week.

Some brief searching leads to items like this: https://stackoverflow.com/questions/63221290/how-to-resolve-missing-symbol-chkstd-darwin-in-libsystem-b-dylib-osx But that doesn't explain why 438 would work for you.

One of the reasons I moved to the new cloud-built macOS build was because my old 10.12 laptop was aging out and unable to build anything compatible with Big Sur. Forgive my ignorance, but is 10.13 starting to be a bit old, or should it still be considered supportable? I will examine the builds and see if I can spot the differences and figure out what I can change in my build script to get you working again. I can't promise anything however, as I am still learning this. Please fall back to 438 for now, obviously.
(29.42 KB 547x488 python_RycwFdr0hg.png)
>>4091 Hmm, yeah, that's the correct thing you did. If you click that 'test parse' button, it'll run the parse and tell you what it pulled. When I do that on the page you pulled, seems like it is finding good downloadable URLs and tags. I notice in your list though that you have a couple of parsers for danbooru (that one with the '(1)' after it), so could it be you have updated your danbooru parser in the past, maybe through the Lain easy drag and drop import system? If that new parser is failing to get good info from the same data, and it is set as the parser to use under network->downloader components->manage url class links, that could be the problem here. Setting the 'link' back to 'danbooru file page parser' may get you working again.
>>4093 I have definitely imported downloaders in the past, probably danbooru ones as well. Setting the parser association with class links has seemingly fixed the issue, thank you, friend. Just to make sure, is there an easy way to get rid of any nonfunctional downloader/parser components I may have gotten in the past so I don't keep the useless data around cluttering the interface?
>>4077
Wow, thank you for this report, I have never seen it before. My best guess is that it is something clever from mpv or Qt when initialising some UI stuff. I think it may be beyond my paygrade. Do you have a laptop with two GPUs or any other GPU driver issues? If it doesn't happen again, I think you can chalk it up to some weird OS blip. Let me know if you get any more info.

>>4085
The guy who makes the Add-On has had a bad experience getting his stuff approved reliably and promptly through the official systems, including Chrome's, and I think he has basically sworn them off at this point. Although it sucks, a lot of hydrus stuff is going to be ad-hoc dev-mode ghetto code.

>>4086
Yeah, I'd love a way to keep up with certain Generals. Making this would be neat, but any automatic system I make will need a lot of checks and brakes to stop it spinning out of control in edge cases. I always underestimate the work required on this sort of thing, so I won't build a system like it without reserving a lot of time to get it right. The best solution for now is to just check yourself every day and drag and drop the URL on your client, or rig up your own script that checks this data how you want and queues it up via the Client API: https://hydrusnetwork.github.io/hydrus/help/client_api.html The API will eventually get extensions to edit and remove threads and queries from downloader pages, so the ability to remove 404s will come in time. But for now you basically have to babysit them once a week or so.

>>4087
SHIT, thank you for this report. I must be grabbing the wrong tag display type for this test! This actually explains a different sorting issue I was looking at today, thanks.
>>4094
Unfortunately, the system and its UI are still the shit technical prototype I first made a couple of years ago, so you have to do it by hand. You can just delete anything that doesn't work, and rename things to nicer names as needed. This is a consistent source of headaches for anyone who edits and updates things, so the next version of the downloader system will have proper unique ids for downloaders and versioning, so updates will smoothly know what to replace, ideally a way to check a remote location for updates, and better UI to handle how this all happens. The picture in my head is like how the old Nexus Mod Manager used to do Oblivion and Skyrim game mods, if you ever used that.
>>4096 Well you have a lot of plates spinning, and Hydrus is already a magical godsend I love dearly as is, so it's not like that is a problem at all, thank you! One last question that I just remembered, though, is the "Danbooru & Gelbooru tag search downloader" a native Hydrus downloader, or is that something I also downloaded at some point before? And if it is native, could you tell me if it has any means of preventing the downloading of exact duplicates between the two sites, so that only one of two exactly identical images that is found on both gets pulled? I could test it myself, but since I'm already here... Also, my experience with Bethesda modding completely predates the NMM, and from that I jumped straight into MO2, but I can visualize exactly what kind of structure you're talking about, and I wish you good luck when you get to it!
(952.18 KB 1354x699 Peek 2021-05-16 00-05.gif)
Bug found. Using Hydrus.Network.439.-.Linux.-.Executable.tar.gz

I have a crash when switching from a video playing in the preview viewer, located at the bottom left, to an image which I double click to see in full screen. This crash keeps happening ONLY with these particular files and not others; I tried with others but I couldn't reproduce the crash.

The steps to reproduce the crash are:
1 - Single click on the video file to play it in the preview viewer
2 - Double click on an image file to show it in full screen
3 - Crash
(2.25 MB 1240x689 Peek 2021-05-16 01-06.gif)
(241.08 KB 683x1063 003,712 - source.png)
>>4098
UPDATE
After more tests I found that the crash can be narrowed down and blamed only on the image file, as it is not necessary to click any other file to cause the crash. I attach a video and the guilty file.
(195.33 KB 1366x768 Screenshot_20210516_014447.png)
>>4099 Another update. This time that .PNG file gave me an error output.
>>4090
I can confirm that matching the header from the browser fixes the issue, for ~24h at least. Not optimal, but absolutely manageable once you get the hang of it. Looking forward to the http header editing :V thanks.
>>4092
Thanks for your comment. I installed from source and so far it works. Well, I still need to migrate my db somehow, but the client runs without errors.

Apple support for macOS 10.13 ended in December 2020. All new Macs will have Big Sur, which only runs 64-bit programs. As you can imagine, this breaks a bunch of games.
is it possible to migrate all tags of one file over to a new one? for example, to append all tags from a file you have that is low quality to the same file you recently imported that is higher quality? (without finding the file through the duplicate filter)
>>4095
>Although it sucks, a lot of hydrus stuff is going to be ad-hoc dev-mode ghetto code.
This is not great imo, and it deserves some serious effort to improve.
can anyone with the skill and time help work on the hydrus network mode in lolisnatcher droid? even minor contributions help. The app works for hydrus, but it seems to be lacking some features (this is also due to Hydrus's client api being incomplete itself, which sucks), and there's a few small bugs. The dev doesn't seem heavily interested in hydrus atm; I don't think he uses it. He said he didn't like that you couldn't leave files where they are and add them to hydrus (without copying them). It's really the only FLOSS app we have for android. https://github.com/NO-ob/LoliSnatcher_Droid
>>4095
devanon, can you add some kind of PTR reference area, where you can see all your previous contributions and see what has been accepted or denied? Also is there a reason when you submit a tag to the PTR, it doesn't apply locally immediately so you can see the changes? Realistically, if it is later denied on the PTR it should be undone locally (or maybe an option to keep it locally). But this all would be viewable in my suggested PTR reference area above.
>>4109
You can manually hit F3 on the old file, hit copy all tags, then hit F3 on the new one and hit paste tags. Wouldn't copy other metadata though. Or, you could set up the "this file is better" to move/copy all tags from the worse to better, and use the right click menu to mark one better than the other. I think that second one works, at least.

>>3626
With the arguing about Tor here, what if Hydrus Companion had a "download current tab through browser" button? Anon could use the Tor Browser to open a page, then hit this button that sends the raw HTML (tags, URLs, etc) and full-size image file to Hydrus, without Hydrus actually connecting to the site being downloaded from. This works around the 'Tor not being anonymous without the browser' thing.
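If you'd rather script the tag copy than use F3, something like this against the Client API could work -- /get_files/file_metadata and /add_tags/add_tags are real endpoints, but the exact payload keys here are from memory of that era's API, so check the Client API help before trusting them:

import json
import requests

api = 'http://127.0.0.1:45869'  # default Client API port
headers = {'Hydrus-Client-API-Access-Key': 'YOUR_ACCESS_KEY_HERE'}

def copy_my_tags(src_hash, dest_hash):
    # fetch the source file's metadata, including its current tags
    r = requests.get(f'{api}/get_files/file_metadata', headers=headers,
                     params={'hashes': json.dumps([src_hash])})
    meta = r.json()['metadata'][0]
    # '0' = 'current' status; response keys assumed from the v17-era docs
    tags = meta['service_names_to_statuses_to_tags'].get('my tags', {}).get('0', [])
    # push them onto the destination file
    requests.post(f'{api}/add_tags/add_tags', headers=headers,
                  json={'hash': dest_hash, 'service_names_to_tags': {'my tags': tags}})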
>>4113 >Or, you could set up the "this file is better" to move/copy all tags from the worse to better This is what I was trying to figure out, where exactly is this option though?
>>4114 >>4113
Ok, I found it, it was kinda hidden. "Set as better". I changed the default settings to copy between both so the tags get applied to both. That being said, after I set the image as better, I notice zero changes in the tags applied to either of the images? So I don't think it's working. I tried two pairs of images, same issue.
https://teddit.net/r/rule34/
Can anyone please make a downloader for Teddit? Reddit would work too, but I can't get it to display searches as NSFW without logging in. There is already a subreddit search downloader for Hydrus, but I want to do a specific search within a subreddit, instead of just downloading the entire subreddit. It partially works with the simple downloader: if I run a search, for example https://teddit.net/r/rule34/search?q=the+little+mermaid&restrict_sr=on&nsfw=on&sort=top&t=all it only downloads the first page's worth, instead of going through all the pages.
>>4058
>tiled renderer
Works a lot better now in the duplicate filter, at high zoom levels. The issue in the previously posted webm is not noticeable anymore from what I can see. The issue still persists to a small degree in certain situations, for example when comparing an image zoomed out against an image zoomed in: image A is larger than the screen (2048x2261) and reduced in size to fit the screen (1440p), image B is smaller than the screen (815x900) and zoomed in to fit the screen. Switching between them, the left half of the smaller image gets shifted a few pixels to the left, and the right side a few pixels to the right, compared to the larger image. They are the same image, just different dimensions.
>>>/hydrus/15114 That place is a dysfunctional mess.
>>4120 sirs right click on the picture click on save image as put name yo want for image and press ok
Why does the 4chan watcher save file urls to the known urls list for files it downloads? Won't these urls be forever useless (or worse, end up being the url for a different file later) after the thread dies?
I had an ok week, but my work time was unfortunately cut short by IRL. I might not normally do a release, but I'd like to get some neat fixes out for the new tiled renderer, which has additional protections in 440 and should no longer crash. There's also some misc cleanup and quality of life. The release should be as normal tomorrow.
Can you save the tweet comment for a twitter image as a tag and automatically add it to images downloaded from that tweet?
>>4127 Also a timestamp?
(172.46 KB 590x434 Annotation 2019-09-14 001418.png)
Hi, I'm post # 2743 from the first general. https://archive.ph/JwSO1#2743 >>2743 (if I have my imageboard linking methods right)

I'm popping in to check on whether the feature I was asking about was added. The feature was an ordered grouping of images, like an image set, or a set of variations of the same image. I called this a "gallery" of images, but that's apparently not the convention when it comes to discussing boorus and other large sources of images in general. When I asked about this, post #2744 and a dev post #2815 said that the ETA was some time later this year. It's later in the year. Has this feature been implemented in some form yet?

For future reference, is there a keyword I can use to quick-search this thread to find information on this feature? I can talk about it, but having different words to describe the same idea makes quick-searching difficult.
>>4131 When Hydrus threads die, they go to >>>/hydrus/ heaven; your post is >>>/hydrus/15458 and that's why your >> link isn't working.
https://www.youtube.com/watch?v=mO1j6xx2HhQ

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v440/Hydrus.Network.440.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v440/Hydrus.Network.440.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v440/Hydrus.Network.440.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v440/Hydrus.Network.440.-.Linux.-.Executable.tar.gz

I had an unfortunately short week, but I did some good work. The tiled renderer has nice fixes.

tiled renderer

I regret that the tiled renderer, while good most of the time, had crashes when it did go wrong. To stop this with any new errors that might pop up, the whole block now has an additional layer of error catching around it. If a tile fails to render for any reason, you now get a black square, and if some coordinate space cannot be calculated or a portion of the image is truncated, the system will now print some errors but otherwise ignore it.

A particular problem several users encountered was legacy images that have EXIF rotation metadata but were imported years ago when the client did not understand this. Therefore, hydrus thought some old image was (600x900) when it then loaded (900x600). In the old system, you would have had a weird stretch, maybe a borked rotation, but in my new tiled system it would try to draw tiles that didn't exist, causing our errors-then-crashes. The client now recognises this situation, gives you a popup, and automatically schedules metadata regeneration maintenance for the file.

some misc

You can now set a custom 'namespace' file sort (the 'series-creator-volume-chapter-page' stuff) right on a page. Just click the new 'custom' menu entry and you can type whatever you like. It should save through your session and be easy to edit again. This is prep for some better edit UI here and increased sort/collect control, so if you do a lot of namespace sorting, let me know how you get on!

I prototyped a new 'delete lock' mode, which prohibits deletion of files if they match a criterion. I am starting with 'if a file is archived'. You can turn this mode on under options->files and trash. It mostly just ignores deletes at the moment, but in future I will improve feedback, and maybe have a padlock icon or something, and eventually attach my planned 'metadata conditional' object to it so you'll be able to delete-lock all pngs, or all files with more than four tags, or whatever you can think of.

new builds to test

This is probably just for advanced users. If you would like to help test, make sure you have a backup before you test anything on a real database!

A user has been working hard on replicating the recent macOS build work for the other releases, cribbing my private build scripts together into a unified file that builds on github itself from the source, as well as rolling out a Docker package. I have had a look over everything and we agree it is ready for wider testing, so if you would like to help out, please check out the test v440 builds here:

https://github.com/hydrusnetwork/hydrus/releases/tag/v440-test-build

These should work just like my normal builds above--the scripts are using PyInstaller and InnoSetup as I do on my machines, so it all assembles the same way--but we are interested in any errors you nonetheless encounter.

We may need to hammer out some stricter library version requirements for older machines, since until now we've basically been relying on my home dev environments staying static until I next remember to run pip update. Once we have these working well, I'd like to use this system for the main build. It makes things easier and more reliable on my end, and should improve security, since the builds are assembled automatically in clean environments with publicly viewable scripts rather than my random-ass dev machines using my own dlls, batch files, and prayers. Who knows, we may even defeat the anti-virus false positives.

Also, if you would like to try the Docker package, check it out here:

https://github.com/users/hydrusnetwork/packages/container/package/hydrus

I don't know much about Docker, so while I can't help much, I'll still be interested in any feedback. If and when we are ready to switch over here, I'll be updating my help with any appropriate new backup instructions and links and so on.

Please also remember that running hydrus from source is always an option:

https://hydrusnetwork.github.io/hydrus/help/running_from_source.html

As we went through this process of automating builds, we've improved the requirements.txts, and I've learned a bit more about automatic environment setup, so I hope I can also add some quick setup scripts for different platforms to make running from source and even building your own release much easier.

full list

- tiled renderer:
- the tiled renderer now has an additional error catching layer for tile rendering and coordinate calculation and _should_ be immune to the crashes we have seen from unhandled errors inside Qt paint events
- when a tile fails to render, a full black square will be used instead. additional error information is quickly printed to the log
- fixed a tile coordinate bug related to viewer initialisation and shutdown. when the coordinate space is currently bugnuts, now nothing is drawn
- if the image renderer encounters a file that appears to have a different resolution to that stored in the db, it now gives you a popup and automatically schedules a metadata regen job for that file. this should catch legacy files with EXIF rotation that were imported before hydrus understood that info
- when a file completes a metadata regen, if the resolution changed it now schedules a force-regen of the thumbnail too
- .
- the rest:
- added a prototype 'delete lock' for archived files to _options->files and trash_ (issue #846). this will be expanded in future when the metadata conditional object is made to lock various other file states, and there will be some better UI feedback, a padlock icon or similar, and some improved dialog texts. if you use this, let me know how you get on!
- you can now set a custom namespace sort in the file sort menu. you have to type it manually, like when setting defaults in the options, but it will save with the page and should load up again nicely in the dialog if you edit it. this is an experiment in prep for better namespace sort edit UI
- fixed an issue sorting by namespaces when one of those namespaces was hidden in the 'single media' tag context. now all 'display' tags are used for sort comparison groups. if users desire the old behaviour, we'll have to add an option, so let me know
- the various service-level processing errors when update files are missing or janked out now report the actual hash of the bad update file. I am chasing down one of these errors with a couple of users and cannot quite figure out why the repair code is not auto-fixing things
- fixed a problem when the system tray gets an activate event at unlucky moments
- the default media viewer zoom centerpoint is now the mouse
- fixed a typo in the client api with wildcard/namespace tag search--sorry for the trouble!
- .
- some boring multiple local file services cleanup:
- if you have a mixture of trash and normal thumbnails selected, the right-click menu now has separate 'physically now' choices for 'delete trash' and 'delete selected'
- if you have a mixture of trash and normal thumbnails selected, the advanced delete dialog now similarly provides separate 'physical delete' options for the trashed vs all
- media viewer, preview viewer, and thumbnail view delete menu service actions are now populated dynamically. it should say 'delete from my files' instead of just 'delete'
- in some file selection contexts, the 'remote' filter is renamed to 'not local'

next week

I had a run of IRL stuff eating my hydrus time, but I think I am now free. I'll catch up on smaller work and keep grinding at multiple local file services.
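Regarding the new delete lock, a toy sketch of where it seems headed once the planned 'metadata conditional' object exists -- these names are hypothetical, not real hydrus objects:

from typing import Callable, Dict

# a 'metadata conditional' here is just a predicate over a file's metadata
MetadataConditional = Callable[[Dict], bool]

delete_locks = [
    lambda m: m.get('archived', False),  # the current prototype: lock archived files
    # future examples from the post: lock all pngs, or files with > 4 tags
    # lambda m: m.get('mime') == 'image/png',
    # lambda m: len(m.get('tags', [])) > 4,
]

def delete_allowed(metadata: Dict) -> bool:
    # the delete is silently ignored if any lock matches, as in the prototype
    return not any(lock(metadata) for lock in delete_locks)

print(delete_allowed({'archived': True}))   # False -- delete is ignored
print(delete_allowed({'archived': False}))  # True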
(46.88 KB 640x360 43f7.jpg)
>>4136 >the tiled renderer now has an additional error catching layer for tile rendering and coordinate calculation and _should_ be immune to to the crashes we have seen from unhandled errors inside Qt paint events I'm the anon who reported a crash at >>4098 >>4099 and >>4100 I report that the crash has been effectively fixed in v440. Thanks OP.
(51.89 KB 297x261 Harada Good Display.png)
>>4136 Can report >>4060 is fixed.
>>4127 Never mind, figured it out.
Whenever I open a new media viewer instance I get a popup message in the bottom right that says "A window that wanted to display at "PySide2.QtCore.QPoint(-960, 9)" was rescued from apparent off-screen to the new location at "PySide2.QtCore.QPoint(10, 10)"". Anyone know how I can prevent just this pop-up from coming up? Or fix whatever the underlying issue is?
>>4112
>devanon, can you add some kind of PTR reference area, where you can see all your previous contributions and see what has been accepted or denied?
Yep, it would be super cool to have it.
(51.04 KB 504x490 smug.jpg)
>>4142 Switch to PyQT5 :^)
How would I go about adding resolution ratio to a custom sort by tags?
(12.93 KB 303x478 ClipboardImage.png)
what the hell happened? there are no philomena logins anymore for derpibooru, twibooru etc. things were working before I updated from hydrus 330s to 340s.
(209.47 KB 1872x1033 1.png)
(206.78 KB 1872x1033 2.png)
(248.78 KB 1872x1033 3.png)
>>4032
>I am going to write better thread debugging UI in the next few weeks to better profile some Client API stuff. That will also help your situation--we'll be able to see which workers in the thread pool are choked. Please keep an eye out for when I get to this--and please remind me if doesn't happen--and we can look into this and get some better behind the scenes info.

I have some really annoying/unfortunate feedback. First of all, the reason I took so long to get back to you was because I kept adding stuff to my gallery import page, occupying all my bandwidth, so most of that time was spent with hydrus not doing anything, because it ran through my bandwidth cap, I guess. But today I was on my last one, so I figured I would do what you said, and paste the urls in a new page. It worked (and everything it downloaded went to finished instantly). But, also, I unpaused the deadlocked url import page, and it imported there, too.

First of all, I'm still on version 437, and I see the latest is now 440, released 4 days ago. Had I read your reply first, I would've tried updating first, in case the "thread debugging UI" you spoke of got added. But secondly, I wish I had unpaused the deadlocked url import page first, instead of copying the URLs and pasting them in a new url import page. Maybe it didn't matter either way, because upon noticing they were frozen, I downloaded the images via a gallery page, so maybe that was why it worked in the previously deadlocked page. But then, for good measure as I was formatting this, I added a new URL not currently in my database, and it worked in the same previously deadlocked page. So there doesn't appear to be any problem anymore, somehow.

I remember my hydrus crashing once between my original post and now, since my 5400RPM hard drive decided to render me unable to do anything, despite being able to move the mouse etc., for over a minute, while a video was playing, and I had an input (or several) buffered. So when it unfroze, hydrus just crashed. Maybe having to boot hydrus again was what made the page work. I don't know.
>>3860
>To address the preview image not showing and other weirdness, if you are the guy with the large session and many downloaders, I will have to reiterate that my best current explanation for your client's odd behaviour is that it is overwhelmed with tasks to perform and some thread scheduling is getting deadlocked or just choked. This could be a great explanation for why that download page is failing to start. I will keep optimising and improving my thread workers, but closing pages and pausing downloaders is your best strategy for now.

I just updated to version 440 (latest) from 437, and I think this stopped happening to me (the preview image not showing up). It didn't show up for a few minutes, but now it does. I remember it not showing up ever, not showing up for hours, only showing up for every image but a specific image, etc. I remember having to avoid having an image selected as the client booted, because that seemed to prevent it from being able to appear in the preview window until I restarted and tried again. So this is much better than before.
Hey, sorry, I am late again!

>>4097
The Danbooru and Gelbooru thing is actually a clever thing, something the advanced users have played with, and a system that sometimes steps in for sites like Hentai Foundry, that combines multiple single sources you already have listed. I call the object that turns 'samus_aran' into a search URL a Gallery URL Generator (GUG), and then another system allows you to bundle these together into a Nested GUG (NGUG). That D&G NGUG is just the default danbooru and gelbooru downloaders--when you enter 'samus_aran', it queues up two starting search URLs, rather than just the one. I suspect future versions of the program will do more in this realm. Many experienced users pull a set of particularly favoured creators from five or more sites at once, trying to catch things that fall through the cracks on one site or another.

But as you suggest, figuring out how to save bandwidth on files you already have is a big issue. There's a little background info here: https://hydrusnetwork.github.io/hydrus/help/faq.html#hashes

Basically, there are mathematical ways of id'ing files using short names, called hashes. You may have heard of 'md5'. Most of the bigger boorus offer md5 or another hashing standard on their file pages, and hydrus downloaders will generally pull this info. After downloading the web page, hydrus can then cross-reference with its private store of known hashes and figure out if the file is 'already in db' or 'previously deleted' or 'possibly new', and in the former two cases, skip the actual file download and move on immediately.

In hydrus, this works if the files are byte-for-byte duplicates. One changed pixel, or any optimisation or metadata stripping, will not match in this system, and you'll get two very similar files that hydrus will eventually start to detect as similar looking duplicates in the 'duplicates' system (which is ok but mostly unfinished, if you haven't checked it out yet).

Hydrus does a similar check, with slightly less confident logic, for previously visited URLs. Run the same search on the same site twice, and the second time it should blitz through, only downloading the original search ('gallery') pages. If you are interested, click the icon button on your download page to see the 'file import status'. Anything with 'already in db' or 'previously deleted' will have a little note next to it describing how hydrus made that decision. On typical booru or imageboard downloads, a mature client will get some nice hash and url matches.
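That hash cross-reference boils down to something like this -- a toy sketch, with the known-hash stores as plain sets rather than database tables:

already_in_db = {'5d41402abc4b2a76b9719d911017c592'}      # md5s of files we have
previously_deleted = {'098f6bcd4621d373cade4e832627b4f6'}  # md5s we threw away

def decide_file_action(parsed_md5):
    # runs after the post page is parsed but before the file itself is fetched
    if parsed_md5 in already_in_db:
        return 'already in db - skip the file download'
    if parsed_md5 in previously_deleted:
        return 'previously deleted - skip the file download'
    return 'possibly new - download the file'

print(decide_file_action('5d41402abc4b2a76b9719d911017c592'))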
>>4098 >>4099 >>4100 >>4137 -and- >>4138
Thank you for these reports! I now have a couple of occasional thin black bars to fix here, but in general, the crashes should be gone. I am sorry for the trouble.

>>4104
Check out here for more hydrus db background reading: https://hydrusnetwork.github.io/hydrus/help/database_migration.html

If you moved from the App, your old db is going to be something like ~/Library/Hydrus, but running from source it will try to stick it in install_dir/db. Everything is completely portable, so the basic routine is:

Go to install_dir/db and move the client*.db files and the client_files directory somewhere safe. (you can delete them later if this all works ok)
Copy that stuff from ~/Library/Hydrus to install_dir/db
Boot the client. (it should now be running off your old db)

But that document will give you more options depending on your situation. Many advanced users use the -d launch parameter to move their db folder location around. It is always a good idea to make a backup of everything before you do any migration, just in case.
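If you would rather script those steps, a hypothetical sketch (the paths are examples -- adjust install_db to your real location, and back everything up first):

import pathlib
import shutil

install_db = pathlib.Path('/path/to/install_dir/db')   # the fresh, empty db dir
old_db = pathlib.Path.home() / 'Library' / 'Hydrus'    # the db the old App used
stash = pathlib.Path.home() / 'hydrus_fresh_db_stash'  # somewhere safe

# step 1: move the empty db out of the way
stash.mkdir(exist_ok=True)
for item in list(install_db.glob('client*.db')) + [install_db / 'client_files']:
    if item.exists():
        shutil.move(str(item), str(stash / item.name))

# step 2: copy the real db in (copy, not move, so the original survives a failure)
for item in list(old_db.glob('client*.db')) + [old_db / 'client_files']:
    if not item.exists():
        continue
    if item.is_dir():
        shutil.copytree(str(item), str(install_db / item.name))
    else:
        shutil.copy2(str(item), str(install_db / item.name))

# step 3: boot the client and check it is running off your old db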
>>4110
I'm an idiosyncratic guy working strictly alone on some fringe imageboard software, so I am afraid I just cannot promise too much professionalism. While I am slowly improving my own coding standard, my weird workflow works well for me, and I haven't yet fully burned out in about eight or ten years of banging out weekly releases, so I am happy to stay at it. If you highly rate a pleasant/easy experience, you may have a better time looking elsewhere.

>>4112 >>4147
The short answer is: maybe, in the future.

The long answer is: if I do this, I'll probably have to do it by remembering a copy of what you upload, or flagging it somehow specially in the database. I've designed the PTR to generally merge and anonymise content uploads, so even where there is some record of which account did what, there isn't a record of which computer did it, and the database tables aren't tuned for those requests too much. But having a clientside memory/copy of uploads is already in the back of my mind, as it will solve some other problems as well. I'll likely do work on a petition workflow sooner, something like 'you uploaded this tag sibling ... and it was (accepted/denied/got a note attached)!' The PTR janitors want this, to get some more convo back and forth, so it will be a logical extension to have your client involved in that.

>Also is there a reason when you submit a tag to the PTR, it doesn't apply locally immediately so you can see the changes?
This actually should be the behaviour for all uploads. You should see 'samus aran (+)' become 'samus aran' when you commit. Do you get otherwise in some cases? Tag siblings and parents still have some borked logic with some of this, and awful UI to work with, but they should generally apply when you set pending, and perhaps a little more firmly when you upload.

>>4113
That Tor thing sounds like it would plug into the Client API best. The Tor Add-on or whatever (like Hydrus Companion) could do whatever in the browser environment, then make a connection to localhost and spam files and metadata at it. Not sure about Hydrus starting the action though. Given how much work all this is, I may just get around to improving the per-domain proxy settings here first. There is probably some proxy daemon you can run on a system that'll pipe certain requests to the Tor service that'd solve this better, given how jury-rigged other solutions sound.

>>4114 >>4115
If you end up doing this a lot, check out options->shortcuts->media, there should be some commands 'file relationships: xxx' that'll let you fire this off on the current selection, making the current highlight (the one in the preview viewer) the 'best', with just a key press.
>>4120
Thanks. I am glad it is working better. I'll get my Animation canvas working on the same system and finally merge my interpolation code, and then review more options and little alignment fixes.

>>4125
Hydrus remembers all Post/File URLs associated with a file download. How that URL is treated and whether it appears in human-friendly lists generally depends on the 'URL Class' definitions under the network->downloader components menu. Anything that doesn't match any URL Class is generally kept but hidden (treated like a 'default' File URL). There is an option not to store a matched URL, so you could try defining a URL Class for each File URL type in your downloader and then explicitly telling hydrus not to save it, but in general this is more work than it is worth. Most boorus have persistent storage URLs, so it can be useful at times to have that raw URL to rely on for URL logic checking, but then sites sometimes just change CDN and the whole file URL format changes, so they aren't super easy to pin down long term either.

So pretty much, in 4chan's case: it doesn't really matter that the download is tokenised by the timestamp, so I expect that's why it didn't get explicitly removed. It is probably an oversight. It definitely wasn't a conscious choice. My next big plan for URLs will be en masse database URL control. I want you to be able to choose a URL format and mass transform all instances of it (e.g. a clever merge of all legacy http records to https), so if this becomes a storage bloat issue, we'll likely deal with it then, when we have a nice tool to deal with it.

>>4127 >>4128
Note parsing is coming hopefully this year, as is pulling post times from sites into the 'modified' date for downloaded files.

>>4131
Sorry, nothing nice yet! Q1 was some emergency server fixing work, and now in Q2 I am doing multiple local file services. I should be able to fit in two more 'big' jobs for Q3 and 4, which I will poll the users for, and I still expect the file alternates expansion to win one of those votes. Can't promise anything though, everything is on fire all the time and everything takes longer than one expects.
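The URL Class decision described above amounts to something like this -- a rough sketch, with the class list hardcoded and a plain boolean standing in for the real 'associate with files' option:

from urllib.parse import urlparse

# (name, netloc, path prefix, keep the url with the file?)
url_classes = [
    ('4chan file url', 'i.4cdn.org', '/', False),                  # ephemeral CDN link
    ('danbooru file page', 'danbooru.donmai.us', '/posts', True),  # persistent post url
]

def should_store_url(url):
    p = urlparse(url)
    for name, netloc, prefix, keep in url_classes:
        if p.netloc == netloc and p.path.startswith(prefix):
            return keep
    return True  # unmatched urls are kept but hidden, per the post above

print(should_store_url('https://i.4cdn.org/g/1620000000000.jpg'))  # False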
>>4142 >>4148
Thanks, have you changed your monitor layout recently? You'll have to dive into some hellish UI here, but check options->gui->frame locations. There's a list there with some deeper settings you can edit. When that window is booting, it is using the settings for 'media_viewer'; presumably it has 'remember position' False but a saved position that is borked. See if there is an easy fix there to get it remembering the last set location again or something, and it shouldn't try to spawn off screen again. If the fix isn't obvious, or it keeps going in the wrong place, please let me know. It could just be that my coordinate calculation is messed up in your setup for some reason. Stuff like multiple monitors with differing UI scales can be a pain in the ass to get correct here.

>>4149
You can't mix the metadata sorts with the tag sorts yet. But I've been working on this stuff recently and I'm seeing a path opening up. The solution for your specific case is probably going to be the editable 'secondary sort' I'll soon be exposing on all pages.

>>4150
Hmm, I am not sure. That dialog should give you all your login scripts, even if one is validity-borked in some way. If you check network->downloader components->login scripts, is it gone from there? I haven't touched the login script defaults in quite some time, so I am not sure if I did this by accident in some update.

EDIT: Looking again at your versions, are you actually on 330? If this is two-year-old updates, I'm afraid I can't remember what was going on. I may have cleared out old login scripts that don't meet some criteria, or some bitrot update may have done the same. Best solution is to re-import the login script in the dialog above. If this is 430->440, I don't think I did it. Same thing though: re-import the login script, see if it maps ok. We'll have to chalk it up to one-time weirdness for now. Let me know if that doesn't work, or if it happens again.
>>4158 >>4159
Yep, sounds like the thread workers were choked from lots of things going on at once. Once the client slowly had its background work reduced, whatever was soft/deadlocking that downloader page's worker freed up. I don't have the thread debug UI available yet; I'm afraid I am spread a bit thin right now. I am glad you were able to clear things out of your session. I should also have the big session breakup ready for v442 in a couple of weeks.

EDIT: Now I think of it, the new tiled renderer and image neighbour pre-caching logical improvements may have cleared out some image cache-related CPU waste in your situation, which may be why preview viewer stuff is working better for you overall. I'll continue to be interested to know how things change in future for you! The GUI session breakup should be a relief for a lot of big-client users.
(16.23 KB 481x265 ClipboardImage.png)
>>4174
>network->downloader components->login scripts
doesn't exist, at least in this menu. I would update from time to time, usually months apart, because it usually doesn't have a huge noticeable difference and I was pretty content with its function and stability. I was using 422 before this and downloaded it around nov last year, so half a year ago. it warned me about datarot with updating 10+ versions so I went and updated to 330s then 340s
(14.16 KB 1106x74 ClipboardImage.png)
>>4186 >>4174
woops, i meant 430s. the old exe file was still in my trash after updating and deleting. in either case, the logins are nuked and I have to rebuild them but have no idea what I'm doing here. think i need a cookie but i don't know where to get it
I had an ok week. I did a mix of different small work and added some new commands to the Client API, including something that should help some difficult login situations. The release should be as normal tomorrow.
Upgrade warning for users on Linux using releases – this week, a new directory structure has appeared. We went from hydrus network/<content> (no caps, classic Hydrus dev) to ubuntu/Hydrus Network/<content>. This broke my upgrade process; don't let it break yours! Hydrus dev, is this intended / meant to stay like this or is this something that'll be fixed next release?
>>4200 >>4201
Sorry about this, I only realised it was still in there this morning when I was doing an extra compatibility check! This is one final thing I need to update in the build scripts, but I didn't want to change it last second for master and then screw up. I'll make a branch next week and do a test side build. I will move it back to 'Hydrus Network' as the directory at the base. I hadn't realised the Ubuntu build was lower case. The Windows and macOS builds have always used caps--you can see how messed up my old mess of scripts was. I'll harmonise to 'Hydrus Network', please check for it next week.
https://www.youtube.com/watch?v=EJLNLWv-nmM windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v441/Hydrus.Network.441.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v441/Hydrus.Network.441.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v441/Hydrus.Network.441.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v441/Hydrus.Network.441.-.Linux.-.Executable.tar.gz I had an ok week. Not as much as I wanted, but there are some nice Client API improvements. all misc this week The test builds from last week seem to work ok, so they are now master. The built clients now use Python 3.8, and the security libraries (like OpenSSL) are all much newer--and will reliably stay up to date in future--so a whole bunch of things across the client should have slightly better performance. There are no special install instructions, they seem to work on an existing install just as normal. Let me know if you do run into any problems! I fixed some more bad tiles calculations for the new tiled image renderer. Some files that seem to have little black lines on an edge at some zooms, or previews that just turn up black, should be fixed! Error reporting is also nicer. The Client API can now do a couple more things. Particularly, it can now set your client's global User-Agent, which should help fix some difficult CDN and login problems in future. Please watch this space. For advanced users, if you have help->advanced mode on, then setting a namespace file sort now allows you to choose which 'tag context' the sort works on. If you hide certain tags in single or multiple media view (as set in tags->manage tag display and search), then those hidden tags will not count for the sort. This is obviously advanced, so if you hadn't thought of it, you can just set 'display tags' to keep 'normal' behaviour. full list - misc: - after successful testing, all the master builds are now made on github rather than my home dev situation. the clients now work off python 3.8, and several security libraries (e.g. OpenSSL) are now always going to be latest, so there should be several quiet performance and reliability improvements across the program. there are no special install instructions--normal update seems to go fine--but let me know if you do have any trouble. big thanks to the user who did the leg work on developing the workflow build scripts here - if you are in advanced mode, namespace file sorting now allows you to set the 'tag display context' on which it will sort. this appears as a new menu button or a button list selection dialog wherever you edit namespace file sorts. if you are not in advanced mode, the default is the 'display tags' I switched to last week (i.e. before any tags are hidden by your tag display options) - namespace sort has some related code cleanup. the 'defaults' object is updated and moved to the newer options object - the new tiled renderer now checks for rounding errors in zoom calc, which in some cases was giving a single extra (non-existing) native pixel row or column on rightmost or bottommost tile samples - the new tiled renderer now double-checks clip regions for validity before attempting to crop - improved the reported error information when a tile fails to render - when pasting an uneven number of tags into manage siblings/parents, the error is now a nicer popup dialog. 
I'm pursuing a related error here--if you get this a bunch, please let me know what more info you discover
- when repositories fail to fetch the update hashes to process, they now force a metadata resync. any processing error should force a metadata resync now
- added a default url class for the new pixiv _artist_ page format
- fixed a recent typo bug with ipfs pinning
- .
- client api additions:
- the client api has a new /manage_headers/set_user_agent call, which is a simple hack for now for external programs to set the 'Global' User-Agent. it should allow for some CloudFlare solutions when just copying cookies is not enough (a rough usage sketch follows at the end of this post)
- the client api has a new /get_services call, which talks about more services and also exposes service_keys for the first time, which are likely to be useful in future. check out the help for an example. the old /add_tags/get_tag_services call is now deprecated, please move to the new call
- the client api /version call now responds with 'hydrus_version' as well, which this week will be 441
- the client api now has a semi-experimental /manage_database/lock system, just like the server's. a new 'manage database' permission is added for this. don't play around with this system idly
- the client api should now support sha256 hash parameters if they start with a type prefix like 'sha256:0123789abcdef...'
- the client and server's database lock commands now wait up to five seconds for the database to finish disconnecting before they respond
- expanded client api unit tests to cover the above
- the client api version is now 17
- .
- boring multiple local file services work:
- the main search object now stores the file domain using a new 'location context' object that will in future hold multiple file services and can say whether we should search files currently in a domain, or those once deleted from it. a variety of back-end search code has been updated to deal with this more flexible situation
- removed more static references to the single 'my files' domain in db and related code. in a couple of places, like mr. bones, it now fetches 'all local files', but this will likely be updated in future to a new umbrella 'all non-trash, non-repo-update-files local files' service

next week

I've had some real trouble keeping up recently, but that's ok. A bunch of it is out of my control, so I'll keep pushing anyway. Next week is due to be a 'medium' job week, and I would like to break up the gui session object into smaller pieces. Instead of saving the whole thing, it'll track and save and share individual pages. This will greatly reduce the random CPU lag and HDD use on any client with a large session, let crazy users store more than 500,000 files in a session at once, and allow us to save changes more often. Basically the same improvement I made to subscriptions and the network objects in the last year, but for gui sessions. I'm due to take my vacation week in two weeks, so I'll aim to have a simple 'clean' release the week after next.
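To illustrate the new Client API calls, here is a quick Python sketch using the 'requests' library. The endpoint paths are from the list above, but the exact JSON key for the user-agent and the response shapes are my assumptions--check the Client API help for the authoritative spec.

import requests

API = 'http://127.0.0.1:45869'  # default Client API address
HEADERS = {'Hydrus-Client-API-Access-Key': '0123...your 64-char access key...'}

# set the client's global User-Agent (new this week); the JSON key is an assumption
r = requests.post(API + '/manage_headers/set_user_agent',
                  headers=HEADERS,
                  json={'user-agent': 'Mozilla/5.0 (compatible; example)'})
r.raise_for_status()

# list all services and their new service_keys (new this week)
services = requests.get(API + '/get_services', headers=HEADERS).json()
print(services)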
If you use a script to automatically extract the Linux or Windows release, check this >>4200 . Sorry for the trouble this week!
Linux client and server binaries also don't have execution permissions, so they won't start unless you set them yourself (chmod +x).
Can a focus be made on the client api in the next update or two? It's a big part of hydrus imo: if users aren't looking at their files directly in the client, they're looking at them through the client api. It seems to lack a lot--for one, proper security, and a guide for setup behind an nginx reverse proxy. It doesn't reply with tag types, system tags can't be searched, and tags don't get returned for suggested searches in apps (like if I type "lin" and there's a tag for "linux", "linux" would show up to click on). I'd also focus on any other networking security holes or concerns that may come up when the client api gets exposed. With that update, the client api could at least be set up securely to the outside, people would have a decent guide with an nginx template to follow, and it would be secure. I'd love to be able to control a hydrus network client from another machine running hydrus network client too. I'd also suggest considering changing the name of the programs, because 'hydrus network server' seems to confuse people into thinking it is what's needed to run the client api, when it's not. There's also no good section in the docs actually going over hydrus network server.
Any thoughts on providing a downloader and parser script list like the one on github, but built into hydrus, so it can remotely grab the list alongside the ptr updates? Then users could see all available scripts and just download them from within hydrus instead of needing to import them. They'd also be able to check for updates and update scripts with this.
>>4210 sirs to download you do press right click end select save - as button you put name of file you save want and select ok
what's the current size of the ptr? i'm getting nowhere fast tagging by hand but I don't want to have more room taken up by fucking tag mappings than images
>>4221 Mine is a bit over 50GiB.
>>4221 >>4222 I really wish there was some way to have the PTR only have mappings for files you have, so that you don't need to carry around the mappings of every file that was ever on it, just to get the mappings of the 2,000 or so files you actually have. Maybe the PTR could download everything, but then just delete the ones that apply to files you don't have, that way there won't be a privacy issue. The problem with that though is about what to do if you get a file that Hydrus already deleted the PTR mappings for. If it asked specifically for those mappings now, it would be a privacy issue again, but if it didn't, then having the PTR is useless for that file now.
(17.05 KB 785x406 db.png)
>>3626 To migrate my entire hydrus database to a new drive due to lack of space on the current one, is this the correct procedure? (The options are confusing, as are the docs.) Also, why does one drive show portable and the new drive doesn't? Go to the migrate database dialog, add a new location for files, select the new desired drive and folder, then click what? Do I need to set the weight or anything? Or just hit "move entire database and all portable paths"?
>>4223 Developer-kun said that at some point he'll make some changes that should deflate the DB. I doubt it'll become that minimalistic, though.
>>4175 I'm the same guy you responded to here. Unfortunately, after updating to latest (441), the same thing I described before happened: image previews didn't work for a long time on boot, and when they did eventually work, the search I keep open in my first page (just a single image I keep selected to view in the preview window, as a homepage of sorts) doesn't show in the preview window, despite being selected. Every other image shows up in the preview window when selected, though. So all I can do is restart my client and not select any image until it finishes booting. Again, given my 5400RPM HDD, the client lags after booting and buffers inputs for several seconds before doing anything, which is weird. It returns to normal about 30 minutes after booting.
>>4209 Some other ideas that would be nice for the API:
- Get relationship information for a file
- Create a new blank page
- Set and modify the search query of a page
- Move/reorder a page
- Refresh a page
Not sure if this is possible, but it would also be nice if the API could return the total number of occurrences of a tag in the database without having to do the more expensive and slow metadata search.
>>4223 I think this would be a privacy-unfriendly way to do it, as you'd have to hash-match every single file you import against the remote server. Hydrus doesn't currently do this (remote hash matching), correct? I assume the point of the local db is to have it all localised, yes?
>>4209 >>4228
>tags don't get replied (for suggested searches in apps, like if I type "lin" and there's a tag for "linux", "linux" would show up to click on).
This. The lolisnatcher app is great, but I can only search for tags I remember because of this issue. Also, when this is implemented, I hope it recognises tag siblings and the like. For example, if I type person:anya taylor or anya taylor, but in hydrus the master is set to person:anya josephine marie taylor-joy, it should still show up as a suggestion while I'm typing person:anya taylor or anya taylor. That way you don't have to worry about typing the wrong sibling spelling and the tag not showing up or some shit.
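Something like this rough Python sketch is what I mean--purely illustrative, with made-up in-memory data, not anything hydrus or the api actually exposes yet:

# hypothetical data: sibling spelling -> ideal tag, plus a flat tag list
SIBLINGS = {'anya taylor-joy': 'person:anya josephine marie taylor-joy'}
ALL_TAGS = ['person:anya josephine marie taylor-joy', 'linux', 'character:samus aran']

def autocomplete(fragment):
    fragment = fragment.lower()
    results = set()
    for tag in ALL_TAGS:
        # match the full tag and also its subtag (the part after 'namespace:')
        subtag = tag.split(':', 1)[-1]
        if tag.startswith(fragment) or subtag.startswith(fragment):
            results.add(tag)
    for spelling, ideal in SIBLINGS.items():
        # a matching sibling spelling should surface its ideal tag
        if spelling.startswith(fragment):
            results.add(ideal)
    return sorted(results)

print(autocomplete('anya taylor'))  # ['person:anya josephine marie taylor-joy']
print(autocomplete('lin'))          # ['linux']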
please add a search bar to the select gallery (for downloader). it's impossible to find stuff when you have many downloaders installed.
>>4222 Damn, probably still not worth it then, I only have 25gb of files.
>>4232 You'll lose out on the time spent manually tagging all your files, if not on disc space.
devanon Ran into a problem, hoping you can provide some insight. I was running version 430 of Hydrus. I did a full system update (Arch Linux) and now I get an error when launching Hydrus. I am running Hydrus from source, and it was working fine. I tried using the latest version of Hydrus, but I still get the same error when launching the program. Here is the error message:

:0: UserWarning: You do not have a working installation of the service_identity module: 'No module named 'service_identity''. Please install it from <https://pypi.python.org/pypi/service_identity> and make sure all of its dependencies are satisfied. Without the service_identity module, Twisted can perform only rudimentary TLS client hostname verification. Many valid certificate/hostname mappings may be rejected.
2021/05/29 15:06:54: hydrus client started
2021/05/29 15:06:55: hydrus client failed
2021/05/29 15:06:55: Traceback (most recent call last):
  File "/mnt/hydrus/hydrus/hydrus_client.py", line 215, in boot
    from hydrus.client import ClientController
  File "/mnt/hydrus/hydrus/client/ClientController.py", line 28, in <module>
    from hydrus.client import ClientCaches
  File "/mnt/hydrus/hydrus/client/ClientCaches.py", line 15, in <module>
    from hydrus.client import ClientFiles
  File "/mnt/hydrus/hydrus/client/ClientFiles.py", line 21, in <module>
    from hydrus.client import ClientImageHandling
  File "/mnt/hydrus/hydrus/client/ClientImageHandling.py", line 6, in <module>
    import cv2
ModuleNotFoundError: No module named 'cv2'

Any insight or feedback you could provide to help troubleshoot or resolve would be greatly appreciated.
>>4233 I have obscure enough tastes that I'm not really sure how much of it would even get hits from the PTR to begin with. I used iqdb-tagger and it only managed to tag about 5% of what I gave it (and with such shitty tags that any time saved was spent cleaning them up and setting up siblings).
>>4186 >>4187 Sorry, it is now moved to the 'logins' section of that menu, just below 'downloader components'. I looked through my update code since 430 and there doesn't seem to be anything related to login scripts. I really haven't touched this system in a while, let alone in the last three months. I am sorry for the trouble. If the login script is a 'normal' one, you shouldn't need a cookie, just your normal username/password, which you put in the login dialog, also under that 'logins' menu. Basically:
(manage login scripts)
- add the script
(manage logins)
- link it to the domain if it isn't already
- set your username/pass and activate it
- it should make sure you are logged in whenever you get anything from that site

>>4208 Thank you! This is fixed for next week. Sorry again for the trouble.

>>4209 >>4230 >>4228 Thank you for your comments. Unfortunately I am always crushed for time, so I can't get much done at once, and I always have five big fires to put out, so it is difficult to schedule focus on one area, particularly something that can be iterated on in small parts. I've been trying to fold little Client API updates into my regular weekly work. Wildcards got added to search recently, and I improved service fetching and the User-Agent call this past week. I am not happy with my current rate of Client API work, so I hope to kick out more updates regularly. If you are interested, you can see how badly I am doing at keeping up with this nice masterlist: https://github.com/hydrusnetwork/hydrus/issues/656 As you say though, I can prioritise, so the next thing I'd like to push on is tag autocomplete (typing 'sam' and getting 'character:samus aran' and other results).
>>4210 Yeah, that's my dream update for the next big network engine push. If you ever used Nexus Mod Manager with the old TES games, that's what I have in my head. I want lists of available mods (downloaders) fetched from one or more remote locations that your client can check, and then proper unique ids and versioning for downloaders so the client can see 'oh, blahbooru updated to v3.1, want to get it?', and then it all updates with no fuss. I regret there is no versioning atm. The current system is easy-ish to expand with new things, but hell to update with fixes.

>>4221 >>4222 >>4223 >>4226 >>4229 >>4232 >>4233 >>4236 It is about 1.2 billion tag mappings, using 70 million unique tags and 23 million files. And yeah, the total db investment, which practically has to be on an SSD, is getting too huge for 'lite' users. That said, all those tags are useful in many situations typical imageboard lads run into, and if you want to distribute 1.2 billion tags, I'm really happy with the efficiency and privacy of my implementation.

>>4223 The problem with this is it throws away the mappings for files you don't yet have. I could store a 23-million-row database table of all the files the PTR has tags for, and then any time you imported a file on that list (hence we needed to add those tags), process those tags. But then I'd need to look up every update regarding that file, which means I'd need a 23 million * n table of file->update_numbers, and we'd also lose the 'all known files' domain for when you enter tags in manage tags. It would ultimately be a much more complicated system than we currently have, and the current system is already difficult enough to wrest under control. Although the current table is gigantic, it is simple.

Ultimately, my plan is roughly a mix of different mitigations:
- serverside tag filter so we can clear out shit en masse (e.g. purge all filename tags)
- clientside tag filter so you can choose just to sync with series/character/creator (etc...) if you want
- definition recycling tech so the PTR can age out really old and duplicate content
- improved janitorial workflow and user permissions, see where that takes us (e.g. maybe in several years, only some users are able to commit 100,000 automatically parsed mappings in one go)
- once the janitorial side is working better, discuss breaking the PTR up into multiple servers with specified roles, e.g. 'anime boorus', 'furry content', 'real stuff', 'reddit', 'memes', etc... and then users can choose what they are interested in
- some misc stuff like better statistical sampling and viewing of what we have (e.g. 'what percent of the PTR is filename tags?'), so we are making more informed choices

In general, the clientside tag filter is the best solution here for users who are more concerned about SSD space (see the sketch below). series/character/creator are about 33% of the PTR, unnamespaced is about 33%, and the rest of the namespaces are about 33%. It'll all take a bunch of work, unfortunately. But I'm happy with how efficient the PTR is at processing now. Now I have to work on managing all the stuff we have collected.
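To make the clientside filter idea concrete, here is a minimal Python sketch of the namespace test it would boil down to--hypothetical function and data, not the actual sync code:

WANTED_NAMESPACES = {'series', 'character', 'creator'}

def want_mapping(tag):
    # keep a PTR mapping only if its namespace is on the wanted list;
    # unnamespaced tags have no ':' and are skipped in this example
    if ':' not in tag:
        return False
    return tag.split(':', 1)[0] in WANTED_NAMESPACES

mappings = [('series:metroid', 'somehash'), ('blue_hair', 'somehash')]
print([m for m in mappings if want_mapping(m[0])])  # [('series:metroid', 'somehash')]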
>>4225 Yes, that looks great. Now you'd hit 'move files now'. The client may freeze up once or twice while it transfers, but just let it work. Try increasing the 'weight' of your E drive location and you'll see what it does--it should go 67% : 33% and then 75% : 25%. Thank you for the feedback on the terms. This has always been a too-technical dialog, and I'll brush it up to be more user-friendly. 'portable' just means 'beneath the db dir'; it isn't something most users need to care about. I have plans to rework how a lot of this works, with better rules than 'weight', and support for background transfer.

>>4227 Thank you for the feedback, I'll keep working.

>>4231 Thanks, this is a great idea. I don't have a widget yet for this 'quick filter a list' concept, but this would be a good thing to start on.

>>4234 It looks like you are missing a python package, which shouldn't be too big a deal to fix. Since you are on Arch, are you using the Arch package here? https://aur.archlinux.org/packages/hydrus/ I don't know much about how this works, beyond it essentially automatically setting you up to 'run from source'. Now I look at that page, I see that 'python-opencv' is in red--could it maybe have been removed from Arch in some way? Make sure you have a backup of your db before messing with any of this! In any case, if you are running from source, your 'solution' by the book would be to run:

pip3 install opencv-python

(that is the PyPI name for the cv2 module) But don't do that just yet. If you are running from source yourself, you'll need to activate your virtual environment first. If you are on the Arch package, you'll have to activate that however it does it, which I am not familiar with. If there is a 'venv' directory in your install, you can try prefacing the pip3 line with:

. venv/bin/activate

If you set up to run from source in the old Arch and never had a venv, then my bet is the new Arch moved up to python 3.8 or something, and your old (system) python environment was lost. In this case, check my newest 'running from source' help. It is a lot easier these days--just set up the venv and then install from the requirements_ubuntu.txt: https://hydrusnetwork.github.io/hydrus/help/running_from_source.html Let me know how you get on. I see twisted is also moaning about 'service_identity', so it sounds like your whole python environment may have been cleared out.
>>4236 You know, thinking about this more--even though it is probably an idle thought--it'd be neat to have a 'tag browser' where a user could preview what kind of tags they'd see with the PTR. The guys who contributed to the PTR have a lot of really obscure tastes too, legit weird shit, so many people who think they wouldn't find their stuff tagged are pleasantly surprised when they do catch up in processing. That said, if a user previously ran batch optimisers or file resizers on their files (so their personal collection shares no hashes with what's on the boorus etc...), then there usually is a zero hit rate. I guess it would be a website hooked up to a future client api of a client that was fully synced, which is probably just too much work for the value of it. Maybe the server itself could optionally offer that service--upload a file/hash and get back the current tags (although the server isn't really tuned for this kind of lookup, so it might be computationally unreasonable).
Is it possible to configure the downloader page's right-click context menu to allow removal of only the archived pictures among those selected? If not, would it be easy to implement? Perhaps it could be designed to only trigger when multiple pictures are selected. My use case: when running a query, I like to determine how many of the pictures in a given range were already archived, and add that to the number I archive when I manually archive/delete filter my way through them, to determine the most recent % of pictures I'm keeping. So for example I might determine that I kept 20% of the first 500 pictures the query returned, but only 10% of the second 500. I run most of my booru queries with the order:quality tag, so this is a common occurrence, since on average the better the art is, the higher its quality rating. When the percentage I keep drops too low, I stop using the query.
>>4240 for me it's not weird fetishes so much as mainly collecting fanart of things I like and often scraping the very bottom of the internet for crumbs. It's not a "I only like fat orc girls shitting" thing, it's a "I only like this game from 20 years ago that was never translated and I even have fanart of it from old dead fansites". Technically the tag browser idea would answer that though, since it would tell me if there's even mappings for the series/characters to begin with. The main solution I'm interested in is just getting character/series/creator tags from ptr since 90% of the time when I search for something it involves just those tags. I'm slowly adding content tags to things I really like, but I also have 9k images without character tags, and countless missing creator tags (I really love how easy Hydrus makes it to preserve source/creator details, and I heavily mourn all the images I lost track of the origin of).
>>4242 >I really love how easy Hydrus makes it to preserve source/creator details Does it? It personally annoys me that there's no equivalent to the wiki that danbooru has.
Devanon, I don't generally like coming in to drop feature requests, but I have recently felt the lack of two (relatively) minor things and thought I might as well throw them out here, just in case you happened to wish to feel some dopamine from actually crossing things off your todo list for once, since these are both just a specific alternate of something that already exists :^).
*Would appreciate a binary/boolean type rating that just exists and responds to system:has/no rating, without extra data associated. You can currently emulate this pretty well by setting a numerical rating with a max of 1, but that still ends up being clunky in usage. I am currently using the numerical emulation of this to flag things for export to other parts of my system (i.e. wallpaper rotation).
*Could we mayhaps get a version of "system:filetype is animation" that only shows things that actually animate, that is, have more than one frame? Hydrus already seems to differentiate them, since single-frame gifs and such don't have the play icon on their thumbnails.
These are both mostly feel-good features rather than functional ones, so feel free to ignore this if you have something else you want to focus on.
>>4244 significantly better than folders for it, since with folders you'd need to sacrifice how easy it is to find shit in order to better preserve these things. A tag wiki might be nice, but if the artist hasn't deleted their shit I can just follow the source link back to their social media, or otherwise search them out using the name I have for them.
>>4247
>Try increasing the 'weight' of your E drive location and you'll see what it does. Should go 67% : 33% and then 75% : 25%
Still not entirely clear on what this is doing; it's a bit confusing. I want to move my entire hydrus db over to the F drive (to use the E drive for other stuff). Currently it's showing a weight of 1 on both, and ideal usage shows 107gb 50% on each drive. Do I have to keep clicking 'increase location weight' on the F drive until it shows 0% on the E drive and the full 214GB on the F drive?
Also unrelated, but is it possible to use regex to search? For example, searching "hairy*" would match the tag "hairy" as well as tags like "hairy armpits" and "hairy pussy". This would be easier than typing in every tag you want. What about delimiters too--for example, typing "hairy*" and then another tag "-*armpits", so it searches for all hairy tags except anything with armpits in the tag? Sorry if my regex sucks, I am still garbage at it lmao. But I'm sure you get my point. Of course this could be extended to the client api too.
>>4234 >>4239 Thanks devanon. Resolving the python-opencv dependency fixed the problem. I don't use the package from AUR, I just pull from hydrusnetwork github. I never did setup a venv, I have just been relying on system dependencies. Thank you for your help with this. And thank you for all the work and time you put into the project. I enjoy your software, it's pretty cool. I owe you a beer.
>>4247 >Still not entirely clear as to what this is doing, it's a bit confusing. I think I see, I just had to remove the E drive, then click move and it moved it all over.
>>4237 Wildcard searching in the API is great. Thanks a lot for the addition of this feature!
Is there a way to clear deleted tag records? I made a mistake with some pool and page numbers (can't tell the correct and incorrect ones apart), so I had to delete all those tags, but when I try to re-download them fresh, the correct tags don't get added again.
Is the number of times a picture with a specific tag was archived or deleted stored somewhere? If not would it be reasonable to implement that tracking statistic? Ideally I would love to be able to see the % of pictures with a given tag that I archived instead of deleted and be able to view a list of tags sorted in descending order based on that statistic. This would allow me to easily identify which tags I like the most which would improve the quality of my future searches.
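To be concrete, here is a minimal Python sketch of the statistic I mean, assuming hypothetical (tag, was_archived) records--as far as I know, hydrus doesn't record or expose these per-tag today:

from collections import defaultdict

# made-up records of past archive/delete filter decisions
records = [('creator:alice', True), ('creator:alice', True),
           ('creator:bob', False), ('creator:alice', False)]

counts = defaultdict(lambda: [0, 0])  # tag -> [archived, deleted]
for tag, archived in records:
    counts[tag][0 if archived else 1] += 1

# sort tags by archive rate, descending
for tag, (a, d) in sorted(counts.items(), key=lambda kv: kv[1][0] / sum(kv[1]), reverse=True):
    print(f'{tag}: {a / (a + d):.0%} archived')  # creator:alice: 67%, creator:bob: 0%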
devanon I upgraded to 441 from I think 437 and hydrus no longer launches when on a mapped network drive in windows. Even a fresh unzipped copy fails to launch, the client.exe process starts, takes about 400mb of memory, and spins 1 thread at 100% indefinitely (hours at least). This worked fine previously and version 441 runs fine when located on a local drive in my system. What do?
why does hydrus client not load results dynamically? if I search for a tag, and there's 5k results, it tries to load them all before showing anything.
Is it possible to tell a gallery search to stop after a specific number of pictures and then, after having reached that limit, override the limit and have it continue returning pictures?
>some data was not uploaded
>unfortunately your account (public user: read only) does not have full permission to upload all your pending content of type (tag siblings, tag parents)!
Did something change with the ptr? Or was I banned from uploading siblings and parents? ._. I can upload tags no issue. Siblings and parents also worked as of about a week ago, when I last tried.
>>4251 This morning I woke up, and when I click database, it shows an option saying 'database is complicated'. It opens the migrate dialog and says the database is in two different places. Apparently there was 60gb left on the other drive; I assume it ended early because I set it to a max of 1 hour. I just moved the rest over and it showed 0 left on the E drive and 100% on the new drive. However, it still shows the 'database is complicated' error, and errors saying the database is spread across multiple places. I'm trying to update my database backup and it won't let me. The database should all now be residing on the F drive, so idk if this is a bug or what?
(18.82 KB 998x406 db.png)
>>4280 Forgot text lmao. It showed there were still some thumbnails, so I clicked 'move files now', and it brought the thumbnails down from 300mb to 169mb. Why tf isn't it just moving it all over to the F drive? The weight is literally n/a and the ideal usage is nothing.
>>4281 ok jeez, i ran it once again, and it finally removed the E line and shows everything is on the F drive. I restarted hydrus, and it still won't show the backup database option; it still shows the 'database is complicated' thing. ._.'
I had a great week. I succeeded in overhauling the client's GUI sessions, greatly reducing the storage and write I/O required for sessions. This particularly benefits clients that have sessions storing many files or URLs. The release should be as normal tomorrow.
>>4247 >>4251 >>4279 >>4280 >>4281 >>4282 I am sorry for the confusion, I did not realise you wanted to move everything over to F. It sounds like you have it sorted now. For the backup, I have made the decision that I cannot write a clever multi-location backup system better than the many third-party options available, so my in-client backup only works on a simple install where everything is tucked under the 'db' directory. Since you are now an advanced user, please check out my help here: https://hydrusnetwork.github.io/hydrus/help/getting_started_installing.html#backing_up FreeFileSync is great, I highly recommend it.

As for >>4247 , I am afraid there is no regex search. I likely cannot feasibly provide it at any large scale with current tech, since SQLite does not offer fast text search for regex. Asterisk wildcards (e.g. 'sa*an' giving 'character:samus aran') work in several ways, though. You can alter the complex wildcard rules under tags->manage tag display and search.
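For the curious, the reason asterisk wildcards are feasible while regex is not is that '*' translates directly to SQL's LIKE, which SQLite can run (and sometimes index) natively, whereas regex would mean pulling every tag out and testing it in Python. A rough sketch of the translation, not the actual hydrus search code:

import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE tags (tag TEXT)')
db.executemany('INSERT INTO tags VALUES (?)',
               [('character:samus aran',), ('hairy armpits',), ('hairy',)])

def wildcard_search(pattern):
    # the user's '*' becomes SQL's '%'; escape any literal % and _ first
    like = pattern.replace('%', r'\%').replace('_', r'\_').replace('*', '%')
    query = "SELECT tag FROM tags WHERE tag LIKE ? ESCAPE '\\'"
    return [row[0] for row in db.execute(query, (like,))]

print(wildcard_search('*sa*an*'))  # ['character:samus aran']
print(wildcard_search('hairy*'))   # ['hairy armpits', 'hairy']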
>>4241 Ah, this is an interesting and clever thought. At the moment, the remove->archived options on the current right-click menu are a little shallow, but if in future it worked algebraically, I suppose you could nest it into some sort of selected->archived, although I don't know how nice the UI would be to handle that. Unfortunately, before I can think of expanding them, I need to do some background work on select/remove to get them working in the shortcuts system. This may or may not fit in your workflow, but if you select the first 100 files in your page, the status bar at the bottom of the main window should say like '100 images, 65 archived, 35 inbox'. If what you are aiming for is a quick count from a selection, does that maybe do it? >>4242 Thanks for explaining, that's interesting. I'd say hesitantly that you might want to wait for me to eventually get around to these clientside tag filters. You are the use-case I am thinking of, and I'd really like it to be easy for you to say 'yeah, I want the PTR, but only character/series/creator'. I am not sure when I will get to this work, but I really want it done this year. It'll likely be a 'medium-size' job week, or a couple in a row, once I have done the serverside filter and the janitor workflow improvements. >>4245 Thanks. Check out the 'like/dislike' rating services. You can add them in services->manage services just like the numerical, and they are just a single rating bubble you can click on or off (actually, they support like/dislike/off with different clicks, but I never personally use the dislike). I use several myself for favourites of different sorts, and 'read this later', and 'this works as a good reaction image'. Just basically clickable tags. For 'filetype is actually an animation', if it isn't inconvenient, can you try adding 'system:duration' as well? I use 'system:has duration' a lot as a catch-all for all types of video, and if you really want to get finicky, the duration panel also has a 'system:number of frames'. >>4248 Great, thanks for letting me know! I am glad you like my program.
>>4253 Yeah, but it is complicated. Hit tags->migrate tags, which is a power tool for doing big complex jobs. Make sure you read the whole dialog and make a backup before you fire a job off, and then you'll be doing something like: mappings | my tags | deleted/my files/all tags | clear deleted record | my tags. But if you know this is just page and pool tags, you might want to change the filter to just 'page:' and 'pool:'.

>>4255 Yeah, I want to add a bunch of stuff like this. There's a super-prototype version of this in manage subscriptions, maybe only in advanced mode, that'll fetch the archived : deleted ratio for a particular query to let you know whether the sub query is good or not. This is on my mind, and as I get around to adding tag metadata tech (i.e. metadata about tags), I'd like to show this. It's beyond what you are talking about, but there are potentially similar topics in machine learning. I'd like in 2022 or 2023 to start early experiments in neural network tech, and I hope that will harvest some interesting info too, like 'images you favourited tend to have this tag'. Once we have this knowledge, there are many ways of phrasing it (e.g. 'show me stuff I am likely to like') to help future workflows.

>>4258 Ah, damn. The new build on github must be using one of the versions of PyInstaller that cannot handle network drive executables. I had refrained from updating my own PyInstaller version precisely because of this issue. I can't look into it now, but I'll check into this old bug and see what the situation is--it may be fixed by now. Github was already bitching at me for using PyInstaller 3.5 in my requirements.txt. Please roll back to 437 for now, or see if you can figure out a temporary local install. I am sure you know, but just in case you don't, you can point to a db somewhere else with the '-d' launch parameter: https://hydrusnetwork.github.io/hydrus/help/launch_arguments.html

>>4260 I experimented with loading results in batches some time ago, and ultimately (at that time), there was much more CPU involved in successively updating the 'selection tags' list (and various other aggregate stores of the current page) with new batches than there was in just computing it once when everything was loaded. It also makes various things more complicated. Note that unlike a booru, which is tuned for paginated fetching, I can't deliver any results at all during the first phase of the search. When you see the status bar think a bit, and then results start building up in batches of 256, that first bit is where it actually does the search proper. So even if I loaded dynamically, the only time saved would be while it was fetching 256 at a time, and then there would be the additional merge CPU cost. I may revisit this, but overall I've been putting my time into improving the speed of searching and result loading behind the scenes, and generally advising people to use system:limit as much as they reasonably can anyway.

>>4267 Sort of, although not in a pretty way. If you set a 'file limit' on a gallery, it'll stop searching at that point. Then, when you revisit it, if you bump up the file limit, click on the 'gallery log' button, and right-click the last fetch, you can say something like 're-do this page and keep searching'. That'll then keep going until it hits the new limit. When I do what you are talking about, I tend to babysit my queries, usually in batches.
I'll let a couple of gallery pages fetch for a creator and then pause the gallery search. When I come back half an hour later, the 84 files or whatever will have downloaded and I can see if that preview seemed good, and then delete the query or let the gallery search continue depending on what I think.
>>4277 We've been slowly working towards a new account format for the PTR, with a public account that can read, and multiple user-created accounts that can write. I started talking about it here: >>3951 but then we had some technical problems that I had to figure out. Please forgive the roughness--this is the first time a lot of this code and UI has really been used at any scale since hydrus started, and I still have a ton of janitor workflow to improve and some sibling/parent maintenance code to catch up on. There's some longer additional convo about the privacy implications starting here >>3953 . tl;dr: I think we're still good overall.

Did the error message you got talk about maybe making a new account under services->manage services? That's the way we're generally going, although I need to work on the workflow and improve the help as we figure out what works. In any case, to fix your actual problem, please hit services->manage services, find your PTR entry, and in that click 'check for automatic account creation'. It should let you generate your own account that can upload parents and siblings. When the janitors look at the stuff you upload, it will be just that (before, with the public key, all the good stuff and the shit was mixed together because it was just one account, and it was increasingly difficult to sort through).
(15.75 KB 965x406 db.png)
>>4293 >For the backup, I have made the decision that I cannot write a clever multi-location backup system better than the many third-party options available I get that, but if I am moving EVERYTHING over to the new drive, I don't have any multi-locations. So I fail to understand why it's giving me this issue.
Are there any plans for a "group by" option in media list? Similar to the current collect option but still keeping the results expanded rather than collapsed, and then doing a line break and some padding between grouped rows. Would be great for organizing broad searches like a certain series, then grouping by artist or character. Thanks as always for your work on this, amazing project.
>>4297 The database counts as a THING. Put your files inside the db folder
Is there a way to limit the simultaneous connections per domain? Sometimes I want to download a bunch of tags from various sites, but I know that with sankaku I can only have ~2 queries going at the same time. Currently I either have to manually pause and unpause or just leave it and have a bunch of galleries stuck because sankaku is hogging all the slots.
>>4300 I don't understand what you are trying to tell me, all the files are in the db folder. It says 214gb is media.
>>4302 Have a read of this to learn more about the structure of hydrus's database: https://hydrusnetwork.github.io/hydrus/help/database_migration.html As well as your files, there are also some .db files where all your settings and subscriptions and file lists and tags etc.... are stored. On that migrate dialog, see where it says 'database : E:\...' up top.
https://www.youtube.com/watch?v=bpEFn3MFyfA

windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v442/Hydrus.Network.442.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v442/Hydrus.Network.442.-.Windows.-.Installer.exe
macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v442/Hydrus.Network.442.-.macOS.-.App.dmg
linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v442/Hydrus.Network.442.-.Linux.-.Executable.tar.gz

I had a great week. An important part of GUI Sessions is overhauled, which should save a lot of hard drive time for larger clients.

gui sessions

I always encourage a backup before you update, but this week it matters more than normal. If you have a client with large sessions and many important things set up, make sure you have a backup done before you update! I feel good about the code, and I try to save data on various failures, but if your situation gives errors for an unforeseen reason, having the backup ready reduces headaches all around!

Like the subscription and network object breakups I've done in the past year, I 'broke up' the monolithic GUI Session object this week. Now, when your session has changes, only those pages that have changed will be saved, saving a ton of CPU and HDD write I/O. Furthermore, sessions that share duplicate pages (this happens all the time with session backups) can now share that stored page, saving a bunch of hard drive space too. Like with subscriptions, some users are pushing multiple gigabytes of session storage total, so there is a good amount of work to save here.

You don't have to do anything here. Everything works the same on the front end, and all your existing sessions will be converted on update. Your client should be a little less laggy at times, and client shutdown should be a bit faster. If any of your old sessions fail to load or convert, a backup will be made so we can check it out later. Let me know if you have any trouble!

Advanced stuff: Another benefit is that the old limit of 'sessions fail to save at about 500k session weight' now applies to pages individually. Please don't immediately try to nuke your sessions with five million new things, but if you do end up with a big session, let me know how other performance works out for you. Now this bottleneck is gone, we'll start hitting new ones. I believe the next biggest vulnerability is thread starvation with many simultaneous downloaders, so again please don't paste-spam a hundred new queries (for now).

If you have been tracking session weight (under the pages menu), I am rebalancing the weights. Before, the weight was file = 1, URL = 1, but after all our research into this, I am setting it to file = 1, URL = 20. In general, I think a page will fail to save at the new weight of about 10 million. If you are in advanced mode, you can now see each page's weight on page tab right-clicks. Let's get a new feeling for the IRL distribution here, and we can aim for the next optimisation (I suspect it'll eventually be a downloader-page breakup, storing every query or watcher as a separate object). Since URLs seem to be the real killer, too, see if you can spread bigger downloads across multiple download pages, and try to clear out larger completed queries when you can.

the rest

I did a bunch of little stuff--check the changelog if you are interested. I have also turned off the interval VACUUM maintenance and hidden the manual task for now.
This was proving less and less useful in these days of huge database files, so I will bring it back in future on a per-file basis, with some UI and more specific database metadata.

EDIT: Thanks to a user submission, the yande.re post parser is updated to pull tags correctly if you are logged in. I hoped my update code would move the link over from the old parser correctly, but it did not. I'll fix this for next week, but if you download from yande.re while logged in, please hit network->downloader components->manage url class links and move 'yande.re file page' from moebooru to 'yande.re post page parser'.

We fixed a couple more problems with the new builds--the Linux and Windows extract builds have their surplus 'ubuntu'/'windows' directories removed, and the Linux executables should have correct permissions again. Sorry for the trouble!

And after some tests, we removed the .py files and the source from the builds. I long believed it was possible to run the program from source beside the executables, but it seems I was mistaken. Unless you are running the build-adjacent source pretty much on the same machine you built on (as my tests years ago were), you get dll conflicts all over the place. If you want to run from source, just extract the source proper into its own fresh directory. I've also fleshed out the 'running from source' help beyond setting up the environment, to talk more about actually downloading and running the program. I'll continue work here and hope to roll out some easy one-and-done setup scripts to automate the whole thing.
full list

- gui sessions:
- gui sessions are no longer a monolithic object! now, each page is stored in the database separately, and when a session saves, only those pages that have had changes since the last save are written to db. this will massively reduce long-term HDD writes for clients with large sessions and generally reduce lag during session save intervals
- the new gui sessions are resilient against database damage--if a page fails to load, or is missing from the new store, its information will be recorded and saved, but the rest of the session will load
- the new page storage can now be shared across sessions. multiple backups of a session that use the same page now point to the same record, which massively reduces the size of client.db for large-sessioned clients
- your existing sessions and their backups will obviously be converted to the new system on update. if any fail to load or convert, a backup of the original object will be written to your database directory. the conversion shouldn't take more than a minute or two
- the old max-object limit at which a session would fail to save was around 10M files and/or 500k urls total. it equated to a saved object larger than 1GB, which hit an internal SQLite limit. sessions overall now have no storage limit, but individual pages now inherit the old limit. please do not hurry to try to test this out with giganto pages. if you want to run a heap of large long-term downloaders, please spread the job across several pages
- it seems URLs were the real killer here, so I am rebalancing it so URLs now count for 20 weight each. the weight limit at which a _page_ will now fail to save, and the client will start generally moaning at you for the whole session (which can be turned off in the options), is therefore raised to 10M. most of the checks are still session-wide for now, but I will do more work here in future
- if you are in advanced mode, then each page now gives its weight (including combined weight for 'page of pages') from its tab right-click menu. with the new URL weight, let's get a new sense of where the memory is actually hanging around IRL
- the page and session objects are now more healthily plugged into my serialisation system, so it should be much easier to update them in future (e.g. adding memory for tag sort or current file selection)
- .
- the rest:
- when subscriptions die, the little reporting popup now includes the death file velocity ('it found fewer than 1 files in the last 90 days' etc...)
- the client no longer does vacuums automatically in idle time, and the soft/full maintenance action is removed. as average database size has grown, this old maintenance function has increasingly proved more trouble than it is worth. it will return in future as a per-file thing, with better information to the user on past vacuums and empty pages and estimates on duration to completion, and perhaps some database interrupt tech so it can be cancelled. if you really want to do a vacuum for now, do it outside the program through a SQLite interpreter on the files separately
- thanks to a user submission, a yande.re post parser is added that should grab tags correctly if you are logged in. the existing moebooru post parser default has its yande.re example url removed, so the url_class-parser link should move over on update
- for file repositories, the client will not try to sync thumbnails until the repository store counts as 'caught up' (on a busy repo, it was trying to pull thumbs that had been deleted 'in the future').
furthermore, a 404 error due to a thumb being pulled out of sync will no longer print a load of error info to the log. more work will be needed here in future
- I fixed another stupid IPFS pin-commit bug, sorry for the trouble! (issue #894)
- some maintenance-triggered file delete actions are now better about saving a good attached file deletion reason
- when the file maintenance manager does a popup with a lot of thumbnail or file integrity checks, the 'num thumbs regenned/files missing or invalid' number is now preserved through the batches of 256 jobs
- thoroughly tested and brushed up the 'check for missing/invalid files' maintenance code, particularly in relation to its automatic triggering after a repository processing problem, but I still could not figure out specifically why it is not working for some users. we will have to investigate and try some more things
- fixed a typo in client api help regarding the 'service_names_to_statuses_to_display_tags' variable name (I had 'displayed' before, which is incorrect)
- .
- build fixes:
- fixed the new Linux and Windows extract builds being tucked into a little 'ubuntu'/'windows' subfolder, sorry for the trouble! they should both now have the same (note caps) 'Hydrus Network' as their first directory
- fixed the new Linux build having borked permissions on the executables, sorry for the trouble!
- since I fixed the urllib3 problem we had with serialised sessions and Retry objects, I removed it from the requirements.txts. now 'requests' can pull what it likes
- after testing it with the new build, it looks like I was mistaken years ago that anyone could run hydrus from source when inside a 'built' release (due to dll conflicts in CWD vs your python install). maybe this is now only true in py3, where dll loading is a little different, but it was likely always true and my old tests only ever worked because I was in the same/so-similar environment, so the dlls were not conflicting. in any case the builds no longer include the .py/.pyw files and the 'hydrus' source folder, since it just doesn't seem to work. if you want to run from source, grab the actual source release in a fresh, non-conflicting directory. I've updated the help regarding this, sorry for any trouble or confusion you have ever run into here
- updated the running from source document to talk more about actually getting the source and fleshed out the info about running the scripts
- .
- misc boring refactoring and db updates:
- created a new 'pages' gui module and moved Pages, Thumbs, Sort/Collect widgets, Management panel, and the new split Session code into it
- wrote new container objects for sessions, notebook pages, and media pages, and wrote a new hash-based data object for a media page's management info and file list
- added a table to the database for storing serialised objects by their hash, and updated the load/save code to work with the new session objects and manage shared page data in the hashed storage (a toy sketch of the trick follows this post)
- a new maintenance routine checks which hashed serialisables are still needed by master containers and deletes the orphans. it can be manually fired from the _database->maintenance_ menu. this routine otherwise runs just after boot and then every 24 hours or every 512MB of new hashed serialisables added, whichever comes first
- management controllers now discard the random per-session 'page key' from their serialised key lookup, meaning they serialise the same across sessions (making the above hash-page stuff work better!)
- improved a bunch of access and error code around serialised object load/save
- improved a heap of session code all over
- improved serialised object hashing code

next week

I have one more week of work before my vacation. There's a ton of little jobs I have been putting off--checking new downloaders users sent in, some more help docs to work on, and magically growing multi-column list dialogs--as well as emails and other messages I haven't got to. I'll try to tidy up those loose ends as best I can before I take my break. I'll also deal with any problems with these new GUI Sessions.
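As an aside, for anyone wondering how the shared page storage deduplicates across session backups: the trick is content-addressing--store each serialised page under the hash of its serialisation, and have sessions refer to hashes. A toy Python sketch, not the real schema:

import hashlib, json, sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE pages (hash TEXT PRIMARY KEY, blob TEXT)')

def store_page(page):
    blob = json.dumps(page, sort_keys=True)
    h = hashlib.sha256(blob.encode()).hexdigest()
    # identical pages across any number of sessions collapse to one row
    db.execute('INSERT OR IGNORE INTO pages VALUES (?, ?)', (h, blob))
    return h

session = [store_page({'query': 'samus aran', 'files': [1, 2, 3]})]
backup = [store_page({'query': 'samus aran', 'files': [1, 2, 3]})]  # same hash, no new row
print(session == backup, db.execute('SELECT COUNT(*) FROM pages').fetchone()[0])  # True 1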
>>4295 Brilliant. I was able to bump up the file limit on gallery searches using your method, with a little modification. For anyone curious, before going into the gallery log and selecting "try again" for the query, you need to change the file limit and click the "set options to queries" button. Then after trying the query again via the gallery log just unpause the search then the files and it will work. This is very helpful for my workflow. Now I don't need to worry about leaving queries running for too long and getting too many pictures. I'm also the guy who asked about removal operations working on only the selected pictures. Somehow I hadn't noticed those stats on the bottom of the screen, that's helpful as it effectively allows me to double the number of pictures I can search at once. I can search say 200 pics, and then determine the statistics for the first 100 and second 100 separately using that info. I've already been enjoying the archived : deleted ratio in manage subscriptions, very much appreciated! I'm excited to see how neural networks eventually improve your program (and boorus in general) in the future. I have little doubt that I'll still be using hydrus network in a decade so I can wait as long as it takes.
>>4303 I already looked at that, but I see what you are saying. I am still confused about how I'm supposed to go about it, though. How exactly do I move that db folder over to where I migrated the rest of the files (F/Docs/HydrusNetwork)? I see the option to "move entire database and all portable paths" (I guess I should have used this originally), but it asks for a location to migrate to, and I can't choose the one I already migrated everything to (F/Docs/HydrusNetwork) because it has the already-migrated files in it, and it tells me to select an empty folder. I'm assuming forcing it will fuck shit up, so I'm not going to.
>>4309 Yeah, since you are already broken up, I think I'll recommend that you just do it outside the client:
- shut the client down
- make sure you have a full backup of everything
- move the install dir with its db to a new location on F
- boot the client. it'll most likely complain that it can't find your files and give you a dialog to remap the locations. set them up so you are good
- go back into the migrate database dialog and make sure all the paths are where they seem like they should be. if you want, move the media files back inside the db dir to make them 'portable' again; the move should be quick this time since it is inside the same partition
- you should be good. make a new shortcut to the new install dir if you need to
Let me know if you run into any more trouble.
(93.51 KB 906x719 Screenshot_20210602_221233.png)
>>4304 A minor hiccup. After decompressing "Hydrus.Network.442.-.Linux.-.Executable.tar.gz", the folder's name changed: the 'N' of 'network' is now uppercase. Not a big deal in Windows, but it may cause trouble for unskilled Linux users.
>>4310 For clarity in this post, F/HydrusNetwork is the new directory that I migrated to, and E/HydrusNetwork is the old directory I'm trying to move to F/.
>move install dir with its db to a new location on F
The install dir is the "Hydrus Network" folder that contains all the other folders and files, such as bin/, db/, hydrus/, static/, certifi/, cv2/, client.exe, etc., correct? Also, the new Hydrus directory is currently empty except for a bunch of folders (with the media) with arbitrary names such as f0, f8, a0, c0, etc. Do I just copy all the folders and files out of E/HydrusNetwork as mentioned above, then drop them into the new F/HydrusNetwork directory root alongside all those arbitrarily named folders?
>it'll most likely complain that it can't find your files and give you a dialog to remap the locations
Will I give it the directory of F/HydrusNetwork there? Just wanted to ensure I had the correct folder placements. Thanks.
(8.52 KB 178x72 asdfgerg.jpg)
I know it's experimental but "replace all underscores with spaces" option is kind of useless if you're not gonna merge the two tags
>>4314 Pretty sure that option is simply a gui one rather than a database operation, mate; the assumption is that you would have them all with underscores, I reason. Set up tag siblings if you are grabbing from a source that keeps fucking it up.
>>4315 The problem is the PTR is full of them, too many to sibling I don't use underscores on my local tags
>>4314 >>4316 >merge the two tags >ptr I originally suggested this when suggesting the underscores to spaces toggle, I completely agree that is an issue. I am also sick of needing to add siblings with underscores and spaces for the same tag. This will massively clean up the db.
An idea: collections, and the ability to view files with text over them. Collections would be cool to group together files that are related or part of a series of similar files (say a photoshoot). The ability to view and add text over files would be cool for things like JOIP (jerk off instruction pictures). It could go a step further and allow downloaders to automatically fetch text posted alongside images on sites like imgur, twitter, etc.
I know you're probably not testing for this dumb edge case, but I had an issue upgrading to 442 with large sessions (20 of them to convert) _and_ being short-ish on disk space (something like less than 10G available). It crashed while converting the sessions because it ran out of available disk space, and I got back to the "welcome to Hydrus" screen on next open, with everything empty (not even files available) and the config back to what seemed like defaults. Fortunately I always back up right before upgrading, so I lost nothing. This also means I have no logs for you, as my rsync erased the log file back to before the issue, sorry; I should have thought about it before restoring.
>The query something for the subscription something hit its periodic file limit without seeing any already-seen files. What does this mean? Is it bad?
>>4326 The subscription normally stops when it sees the newest thing from the last time it checked, so that it doesn't grab things multiple times for no reason. There is also an option (on by default) when setting up subscriptions to stop after a certain number of files are grabbed at once, since subs are intended to continuously get new stuff rather than grab everything a gallery has--the actual gallery downloader is better equipped for that. That message is telling you that that specific subscription hit this limit (and thus stopped checking) before it saw last time's stopping point. This is intended behaviour, so nothing bad has actually happened; it's just that you might have missed some files within that gallery, in which case you might want to run the gallery downloader on that link if getting all the files is important, which is why it's telling you.
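In rough illustrative Python, the check works something like this (not the actual subscription code):

def sub_sync(gallery_urls_newest_first, seen_urls, periodic_file_limit):
    new_urls = []
    for url in gallery_urls_newest_first:
        if url in seen_urls:
            # reached last check's stopping point: the normal, quiet stop
            return new_urls, False
        new_urls.append(url)
        if len(new_urls) >= periodic_file_limit:
            # hit the limit before seeing anything already-seen, so there
            # may be a gap--this is when you get that popup message
            return new_urls, True
    return new_urls, False

urls, maybe_gap = sub_sync(['u9', 'u8', 'u7', 'u6'], {'u5'}, 3)
print(urls, maybe_gap)  # ['u9', 'u8', 'u7'] True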
>>4327 Thanks, that's fine then.
Haven't followed the threads much, but I had a feature idea: a built-in image converter/compressor. How big of a hassle would it be to implement?
>>4310
I'm still trying to figure this out; I think I fucked up my dirs. I was moving stuff, and it never complained it couldn't find any dirs, because I had already moved the files to that folder on F, so it already knew the mapping. I then tried just clicking migrate entire db and all portable paths, and assumed it'd take EVERYTHING including my media and move it over to whatever dir I chose. I guess not, because it only moved like 4 .db files to the new folder. When I went back into Hydrus, it reset all my settings and everything. It still had my media folder mapped, but my theme was reset and so on, so I'm very confused. Now I'm just saying fuck it: I'm moving everything to a new folder, and moving the media files folder into Hydrus Network/db/client_files, which is where I think they belong? Idk anymore. But this process is really tedious; there should be a simple button for users who literally just want to move their install, db, and media files to a new directory. One click, and it moves everything automatically. Also, the docs aren't very helpful for this either. I'd highly suggest streamlining this process, rephrasing some of the terms in the dialog, and going over this page in the docs.
(17.68 KB 932x406 dbn.png)
>>4310
Ok, so I ended up needing to restore from the backups. I placed the db and install folder I had on the E drive into a new location on the F drive, then I placed the media folders (the ones I had previously migrated within hydrus to the F drive, with all my media files in them) into db/client_files. Upon starting hydrus it asked, and I set the new location. However, it's still showing two locations when it should only show one. It also shows one is portable and one isn't. I do now have the option to update my database backup, so it seems semi-fixed. I just need to figure out how to merge those two mappings into one, seeing as they both point at the same folder.
>>4298
Yes, I'd like precisely that--some way of having multiple 'blocks' of thumbnails that I can separate by arbitrary boundaries, like filetype or ranges of filesize, or, as you say, maybe play with tags too. At the moment, there are three limitations to sort/collect/group:
- I am still slowly converting my sort/collect stuff to newer code. I have been able to improve this in recent weeks though, so I am feeling better about tacking on new options and tag contexts.
- My thumbnail code is probably the oldest thing in the program and it is horrible. The foundation is all duct tape and bad ideas, and I need to completely overhaul all of it. It has two problems:
-- It can't deal with a file appearing more than once in its media structure, which stops things like collections-by-single-tag, where you might collect by every separate 'creator:' tag, and then a file that has two creator tags could appear in two collections on the same page.
-- It can't deal with anything more complicated than one list, so we can't do groups easily. Extending the data structure behind it to do groups won't be super difficult, but my thumbnail drawing code is all custom and needs a similar overhaul.
So, basically I need to do to thumbnails what I recently did to taglists to allow flexible tree-like sibling and parent display. It'll just take scheduling, once I am done updating sort and collect objects and exposing all the logic here (like secondary sort) in UI.
>>4301
Not yet, there's just the global value in options->connection. My plan here is to write 'domain manager' UI, which will be the one-stop location to review which domains have had recent errors, set which proxies to use for which domain, maybe move domain bandwidth options and downloader objects in there, and then add per-domain stuff like num simultaneous connections.
>>4311
Thank you. I always thought the Linux extract used the same capital letters as the Windows one. I've decided to harmonise, so please forgive any confusion in this transition. I hope this to be a one-time switch, and now we are going to stick with "Hydrus Network" going forward. ( >>4202 )
>>4313
Yep:
Install = The 'Hydrus Network' folder with the executables, dll files, and folders like 'cv2' and 'static'.
DB = The four client*.db files, which by default are under install_dir/db.
Media = The 256 'fxx' folders and 256 'txx' folders, which by default are under install_dir/db/client_files.
Since you are moving all to the same drive, if you have not already sorted it, I think you should do this:
Move "Hydrus Network" ('install dir') to "F:\Hydrus Network" or whatever.
If the four db files weren't already in "F:\Hydrus Network\db", put them there.
Put the 512 fxx and txx media folders in a different location on F, let's say "F:\hydmedia". (I'd say don't put the media folders inside \db\client_files in this step, just for some technical reasons related to boot-heal.)
Then run client.exe. It'll likely throw up the 'hold up, where the jpegs?' dialog on boot, asking for new media folder locations. Correct it by just giving it "F:\hydmedia", and it'll fill in all the gaps. It will then finish boot and load. Click on some search pages to make sure your thumbs and files are indeed all loading correctly.
Then hit up migrate database again, which will probably (due to the boot-heal) have some weird ideas of where files 'should' be. Just add "F:\Hydrus Network\db\client_files" as a new location, 'remove' any other locations, and then hit 'move files now', which should be fast across the same disk partition. It'll move all your files and thumbs 'back' to \db\client_files.
Once you are done, you have a fully 'portable' install on F:\, all with 'default' locations. You should get the 'backup' command available in the database menu again.
>>4314 >>4315 >>4316 >>4317
Yeah, I agree. It needs a hardcoded rule in the siblings system, but I'll have to write efficient mass-sibling logic first to get it to work in non-insane time, which will be a big step. Once that system is available though, we should also have 'rename all artist: tags to creator:' tech!
>>4318
Yeah, do you mean like the translation text boxes some boorus support? I have it on my whiteboard to finally figure out 'note parsing' in the downloader system one of these 'medium size job' weeks (which I do once every four weeks). When we have that tech, I know some of the parser lads are going to try parsing that data in some standard json format or whatever, and then I can think and play around with actually displaying it. For 'collections' as I think you are talking about, please wait for later in the year, when I fully expect to flesh out the current stub 'file alternates' system. I want arbitrary file connections (e.g. 'this file is a costume change of this file', 'this file is the subsequent file in a photoshoot series to this file'), and then proper thumbnail-bundling and media-viewer-browsing tech to employ that metadata.
>>4323
Ah shit, I am sorry! It may have backed up a bunch of your old sessions to JSON in your 'db' directory, although I think in this special case, since it was short on disk space, even that may have failed, and your rsync would have wiped them anyway, so not applicable. I am worried about your being reset to 'welcome to hydrus' like a first boot. That suggests your db files were actually wiped or truncated to zero bytes. I would obviously never do this intentionally--I have no code that deletes any db file--so I think this must have been an unusual file truncation back to 0 bytes caused by the emergency situation of hot database files and no disk space. Well done for having the backup. I do the exact same as you: backup right before I update. Then you don't have to worry about anything. I have some 'check free disk space' tech in my old vacuum code. I will make a weekly job to see if I should do a pre-update check for a certain amount of free space, and dump out of the attempt if it is too small. Sorry again for the trouble, and thank you for the report.
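Something like this would do the pre-update check, just with the standard library (a sketch; the 10GB figure is only an example threshold, and the real check would live in the update code):
[code]
import shutil

def has_enough_free_space(db_dir, needed_bytes=10 * 1024 ** 3):
    """Return True if the partition holding db_dir has at least needed_bytes free."""
    return shutil.disk_usage(db_dir).free >= needed_bytes

# dump out of the update attempt before touching any hot database files
if not has_enough_free_space('/path/to/hydrus/db'):
    raise RuntimeError('Not enough free disk space to update safely, aborting.')
[/code]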
>>4326 >>4327 >>4328
Yeah, it means one of:
- Someone went on an upload spree and spammed 150 pictures to that query in three days.
- The site changed URL format, so the subscription thinks it is seeing a load of new stuff, when really it is the same content but with different URLs.
The limit is to catch the latter situation, and the popup message is to let you check for the former. I'll update the text to be more human friendly. The remedy here atm is to run that query again in a downloader page to catch the gap you missed. I'd like to have a button that sets this up for you automatically; that would be most human friendly, I think.
>>4335
It is a complicated subject. Briefly:
Changing file content is a headache for various hash-based technical reasons that matter whenever you grab or communicate about files with other computers (e.g. boorus, the PTR).
Compressing or converting files tends to not save all that much space overall--normally 5-10%, and less for video--and can produce endless conversations on what the 'best' way to do it is. (JPEGXL may be a silver bullet here, with ~50% lossless savings.)
I don't want to hardcode this stuff. So, what I plan to do is write an 'external executables' system for hydrus where you can set up a variety of different pipeline algebras, for instance:
waifu2x upscale | exe path | takes an image file using these launch parameters | produces a converted duplicate image file in this temp location
ffmpeg to webm high quality | exe path | takes a video file using these launch parameters | produces a converted duplicate video in this temp location
youtube-dl | exe path | takes a url using these launch parameters | produces a video for that URL in this temp location
etc...
Then, with that input/output algebra, I can plug those pipes into hydrus (e.g. right-click on image->convert using 'waifu2x upscale') and let users define and share all sorts of conversion and file/URL generation scenarios. Youtube-dl and Gallery-dl would obviously be great to plug in to the downloader system. Automated conversion to JPEGXL or whatever else you want would be a schedulable mass job I could pack into the existing file maintenance system, while preserving appropriate metadata across the dupes. This is a little way off, but that's my plan.
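To illustrate the kind of input/output plumbing I mean, a rough sketch (made-up structure and parameters, not the actual planned API):
[code]
import subprocess
import tempfile
from dataclasses import dataclass

@dataclass
class ExternalPipe:
    name: str
    exe_path: str
    arg_template: list  # strings; '{src}' and '{dst}' are substituted per job
    out_suffix: str

    def run(self, src_path):
        """Run the executable on one file, returning the converted copy's path."""
        dst_path = tempfile.NamedTemporaryFile(suffix=self.out_suffix, delete=False).name
        args = [a.format(src=src_path, dst=dst_path) for a in self.arg_template]
        subprocess.run([self.exe_path] + args, check=True)
        return dst_path

# hypothetical entry mirroring the 'ffmpeg to webm high quality' line above
ffmpeg_to_webm = ExternalPipe(
    name='ffmpeg to webm high quality',
    exe_path='ffmpeg',
    arg_template=['-y', '-i', '{src}', '-c:v', 'libvpx-vp9', '-crf', '30', '{dst}'],
    out_suffix='.webm',
)
[/code]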
>>4352 >>4354
Thank you for the updates; I am sorry it has given you trouble. I agree that this technical dialog and the ugly help need a complete overhaul. I hope to make this all more user friendly in future. I think I should add some more red warning text to the top of this dialog in the meantime. Since you are several steps in, you can ignore the post I made earlier at >>4365
In your image in >>4354, I assume you want those media files to stay in "F:\Documents\Hydrus Network\Hydrus Network\db\client_files"? Is that the full path of that second location? It looks like hydrus has a memory (likely confused from an automatic healing routine) of them being in the "F:\Documents\Hydrus Network" location, and it thinks it should be moving them over there, so let's fix it:
Click the good location with 0 weight.
Click 'increase location weight' once.
Click the bad location with 'nothing' current usage.
Click 'remove location'.
The dialog should now have the one location, still 'portable' since it is inside your db directory, with current usage at 100% and ideal usage at 100%. Let me know if you still have any trouble.
Does that "related tags" suggestion feature in the "manage tags" area consider what tags are NOT often seen with each other? I see the related section suggestion a lot of tags that don't really make much sense together, and sometimes suggest tags that are mutually exclusive, like "solo" and a post that's already tagged "duo" for example. It would be much more useful if the statistical algorithm used here would consider tags NOT being seen often together in its suggestions as well.
(131.01 KB 1280x720 yay.jpg)
>>4366 >I want arbitrary file connections (e.g. 'this file is a costume change of this file', 'this file is the subsequent file in a photoshoot series to this file', and then proper thumbnail-bundling and media-viewer-browsing tech to employ that metadata.
Has there been any progress with ICC color profile management? I was searching for this issue and found some posts from 2019, but after that I see nothing.
It would be cool if there was an option to have automatic search result fetching as you type, but with autocompletion disabled and instead bound to a shortcut. For me, fetching exact results (ideal siblings, for example) is quick, but autocompletion is slow. I wish there was an option to enable the former but not the latter, and have the latter activated by a shortcut, like how you can force a fetch using ctrl+space right now. This is probably easy to implement, and it would make adding tags to a bunch of files one after the other a lot easier for me: I wouldn't have to deal with Hydrus constantly freezing for 30 seconds, I would still get the automatic adding of an ideal sibling upon entering an un-ideal sibling from the "exact" result fetching, and the shortcut would mean I could still have actual autocompletion when I need it.
>>4172
>There is probably some proxy daemon you can run on a system that'll pipe certain requests to the TOR service that'd solve this better, given how jury rigged other solutions sound.
There is already a Tor daemon that provides a SOCKS5 proxy, and I'm fairly certain you can route through the Tor Browser's connection as well. I think the main problem is that when Hydrus makes a request, it looks a lot different to the server from what the Tor Browser sends. I may be wrong, but in my understanding Hydrus just scrapes the HTML/API of the site, grabs the tags and image URL, and then downloads the image. Meanwhile, the Tor Browser would render the site in just about the same way for anyone using it, making it much more anonymous. If you use a Tor proxy in Hydrus, your true IP is hidden from the server and your ISP cannot tell what server you connected to, but the server can still likely guess you're using Hydrus from HTTP headers or user agent and whatnot, along with it selectively not downloading things that regular browsers do. This is better than nothing in my opinion, but not nearly as anonymous as using the Tor Browser. Setting up some browser add-on to download through the browser and pipe to Hydrus would only really work for whatever image was on the page at the time. I'm not sure how subscriptions or download pages would work when called from within Hydrus without causing Hydrus dev a lot of work.
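For reference, pointing a python HTTP client at the Tor daemon's SOCKS port looks like this (assuming tor is listening on its default 127.0.0.1:9050 and the requests[socks] extra is installed; hydrus's own proxy setting lives under options->connection):
[code]
import requests

# socks5h (not socks5) so DNS resolution also happens inside Tor
proxies = {
    'http': 'socks5h://127.0.0.1:9050',
    'https': 'socks5h://127.0.0.1:9050',
}

resp = requests.get('https://check.torproject.org/api/ip', proxies=proxies, timeout=60)
print(resp.json())  # {'IsTor': True, 'IP': '...'} if the circuit worked
[/code]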
Reminder to check your backups. I had to restore from backup, and after a day of fucking around with various backups with corrupted databases, the most recent working one I had was a month old.
>>4398
How do we check our backups to ensure they're okay, without restoring from them I mean? Also, if there isn't one already, maybe it would be a good idea to implement a feature that compares the current db with the backup db, to ensure the backup isn't corrupted and is fully functional.
>>4401 It depends on how you back up. I've been making encrypted archives once a month instead of the built-in "copy everything to X folder" button, and the one I made on June 1st was corrupted. A separate daily automatic backup for some reason ignored my Hydrus folder most of the time and corrupted the databases when it didn't. I'll probably change to making a backup of the encrypted version of my Hydrus directory instead of an encrypted archive of the unencrypted data.
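To partially answer >>4401: the client*.db files are plain SQLite, so you can at least run SQLite's own structural check against the backup copies without restoring anything — a sketch, with a made-up backup path:
[code]
import sqlite3
from pathlib import Path

backup_db_dir = Path(r'F:\hydrus_backup\db')  # hypothetical backup location

for db_path in sorted(backup_db_dir.glob('client*.db')):
    # open read-only so the check cannot modify the backup
    con = sqlite3.connect(f'file:{db_path.as_posix()}?mode=ro', uri=True)
    try:
        result = con.execute('PRAGMA integrity_check;').fetchone()[0]
        print(f'{db_path.name}: {result}')  # a healthy file prints 'ok'
    finally:
        con.close()
[/code]
This only catches structural corruption, though; it says nothing about whether the backup is complete or recent.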
I had a great week working on small quality of life issues. A couple of bugs are fixed, some UI lag is reduced, and I worked on some layout too. Just a mix of cleanup before my vacation next week. I have some unavoidable IRL tomorrow, so the release may be a bit later than usual.
>>4398
>It depends on how you back up
built in backup manager
>>4406
>I had a great week working on small quality of life issues
thanks devanon.
>I have some unavoidable IRL tomorrow, so the release may be a bit later than usual.
man, live your life, no need to be online 24/7. unplug for a day or two, it's good for your health and feels good. we appreciate your work. just preferably let us know before you ever plan on abandoning the project :p thanks for your work!
(1.27 MB 3571x2678 aee7tg48.png)
>>4410
>just preferably let us know before you ever plan on abandoning the project :p
Don't give him ideas.
(28.84 KB 658x901 photo_2021-06-09_14-33-52.jpg)
Is there a downloader for Endchan? I don't see one. I'd like to add an endchan catalog to my subscriptions.
>>4406
Could you consider adding a proper log/info area to see all errors from all the various things happening in hydrus? Like, to see subscription successes and errors, watcher successes and errors, any db issues, etc. Also, does the database backup do any verification? And can we get an option to run a manual verification on the backup to ensure it's proper even if it does? Relevant to >>4401
Thanks
>>4414
>Could you consider adding a proper log/info area to see all errors from all the various things that are happening in hydrus?
Have you looked at db/client - <year>-<month>.log? Because I think that's exactly what you're describing.
Which reminds me: devanon, could you consider adding 0 padding to the month (stored in current_month, set with tm_mon) in the log filenames, please? My OCD gets sad when it sees 12 before 2. Thank you!
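For what it's worth, the padding is a one-character fix if the name goes through strftime (a sketch; I haven't checked how hydrus actually builds the filename):
[code]
import datetime

now = datetime.datetime(2021, 6, 9)

# unpadded: 'client - 2021-6.log' sorts after 'client - 2021-12.log' in a file listing
unpadded = f'client - {now.year}-{now.month}.log'

# '%m' zero-pads the month, so names sort chronologically: 'client - 2021-06.log'
padded = now.strftime('client - %Y-%m.log')

print(unpadded, '->', padded)
[/code]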
>>4416
I meant an internal viewer, in which the user could also easily filter/sort by type of error (for example, backup errors, PTR errors, etc.), or sort/filter and then export only those filtered lines to a temporary file to view in whatever editor the user pleases.
>>4406
I'm the anon that was having the database migration troubles (E drive to F drive). All seemed to be working well, but currently I am trying to update to a newer version of Hydrus on Windows.
>Update Issue/My Process
I used the installer as I have always done in the past, clicked install, clicked through, and then it finished. I opened the client, and it was a fresh install of Hydrus. I went back into the installer, and I think it had tried to install under the E drive and I didn't notice. So, I tried clicking browse in an attempt to change the install directory to the already-installed one on the F drive. I navigated to the install folder on the F drive, clicked it, and it prompted me with a message stating "The folder: F:\Documents\Hydrus Network\Hydrus Network already exists. Would you like to install to that folder anyways?; Yes; No". I selected no for now and just canceled the install on that dir, because I am unsure whether this is going to overwrite that directory or it's just stating it will update that directory (I would suggest clarifying this in the installer, btw).
>New Folder
Also, upon doing that first install, it created a Hydrus folder on the E drive; I'm assuming this is safe to delete. It also changed my system shortcut (windows search) for hydrus to that E drive instead of the one on my F drive. If I manually go into my F drive at F:\Documents\Hydrus Network\Hydrus Network\client.exe and launch it, my original database is there and looks fine. So it seems like it created a new client and database on that E drive where Hydrus used to reside before I moved it to the F drive.
>So I'm not sure what to do..?
I just want to update my Hydrus on the F drive, so that in the end the ONLY Hydrus install (and ALL the folders for Hydrus) on this system resides in that "F:\Documents\Hydrus Network\" folder.
A cool idea: fetch media from Jellyfin and Nextcloud and just parse the tags (don't download the files), so users can import their porn libraries without duplicating all their files (it would basically stream right from Jellyfin through Hydrus). Hydrus would just store the url and any file information such as the hash of the file and whatnot, then the user could apply any tags to it. It just wouldn't download the file.
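The 'hash without keeping the bytes' part is cheap, for what it's worth — a sketch of streaming a remote file and hashing it on the fly (the URL is a placeholder and auth is left out; hydrus uses SHA-256 as its master hash):
[code]
import hashlib
import requests

def hash_remote_file(url):
    """Stream a remote file and return its SHA-256 hex digest, never touching disk."""
    h = hashlib.sha256()
    with requests.get(url, stream=True, timeout=30) as resp:
        resp.raise_for_status()
        for chunk in resp.iter_content(chunk_size=1 << 16):  # 64 KiB at a time
            h.update(chunk)
    return h.hexdigest()

# the client would then store just the url, the hash, and the user's tags
print(hash_remote_file('https://jellyfin.example/Items/123/Download'))
[/code]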
https://www.youtube.com/watch?v=NgYIIPszZjA
windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v443/Hydrus.Network.443.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v443/Hydrus.Network.443.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v443-macos/Hydrus.Network.443-macos.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v443/Hydrus.Network.443.-.Linux.-.Executable.tar.gz
I had a great week doing nice cleanup and quality of life work.
Hey, we had a problem getting the macOS release to build this week. The macOS link above goes to a build using a simpler and faster method. It should work fine, but please let me know if you have any trouble. As always, back up before you update!
highlights
Popup messages can now launch complex jobs from a button. The first I've added is when a subscription hits its 'periodic' file limit. The situation itself is now better explained, and a button on the popup will create a new downloader page with the specific query set up with an appropriate file limit to fill in the gap. The second is for when you try to upload some content to a repository that your account does not have permission for (this is affecting sibling- and parent-uploading PTR users as the shared public account is changing): the popup message that talks about the issue now has a button that takes you straight to the manage services panel for the service and starts automatic account creation.
Subs should now be more careful about determining when they have 'caught up' to a previous sync. Small initial file limits are respected more, and the 'caught up' check is now more precise with sites that can give multiple files per URL or very large gallery pages.
I gave options->speed and memory a full pass. The layout is less crushed and has more explanation, the options all apply without needing a client restart, and the new, previously hardcoded cache/prefetch thresholds are now exposed and explained. There's a neat thing that gives an example resolution of what will be cached or prefetched, like 'about a 7,245x4,075 image', that changes as you fiddle with the controls.
The client has recently had worse UI lag. After working with some users, the biggest problems seemed to come in a session with lots of downloaders. I traced the cause of the lag and believe I have eliminated it. If you have had lag recently, a second or two every now and then, please let me know how things are now.
If you use the Client API a lot while the client is minimised, you can now have it explicitly prohibit 'idle mode' while it is working, under options->maintenance and processing.
full list
- quality of life:
- when subscriptions hit their 'periodic file limit', which has always been an overly technical term, the popup message now explains the situation in better language. it also now provides a button to automatically fill in the gap via a new gallery downloader page called 'subscription gap downloaders' that gets the query with a file limit five times the size of the sub's periodic download limit
- I rewrote the logic behind the 'small initial sync, larger periodic sync' detection in subscription sync, improving url counting and reliability through the third, fourth, fifth etc... sync, and then generalised the test to also work without fixed file limits and for large-gallery sites like pixiv, and any site that has URLs that often produce multiple files per URL.
essentially, subs now have a nice test for appropriate times to stop url-adding part way through a page (typically, a sub will otherwise always add everything up to the end of a page, in order to catch late-tagged files that have appeared out of order, but if this is done too eagerly, some types of subs perform inefficiently)
- this matters for PTR accounts: if your repository account does not have permissions to upload something you have pending, the popup message talking about this now hangs around for longer (120 seconds), explains the issue better, and has a button that will take you directly to the _manage services_ panel for the service and will hit up 'check for auto-account creation'
- in _manage services_, whenever you change the credentials (host, port, or access key) on a restricted service, that service now resets its account to unknown and flags for a swift account re-fetch. this should solve some annoying 'sorry, please hit refresh account in _review services_ to fix that manually' problems
- a new option in maintenance and processing allows you to disable idle mode if the client api has had a request in the past x minutes. it defaults disabled
- an important improvement to the main JobScheduler object, which farms out a variety of small fast jobs, now massively reduces Add-Job latency when the queue is very busy. when you have a bunch of downloaders working in the background, the UI should have much less lag now
- the _options->speed and memory_ page has a full pass. the thumbnail, image, and image tile caches now have their own sections, there is some more help text, and the new but previously hardcoded 10%/25% cache and prefetch limits are now settable and have dynamic guidance text that says 'about a 7,245x4,075 image' as image cache options change
- all the cache options on this page now apply instantly on dialog ok. no more client restart required!
- other stuff, mostly specific niche work:
- last week's v441->442 update now has a pre-run check for free disk space. users with large sessions may need 10GB or more of free space to do the conversion, and this was not being checked. I will now try to integrate similar checks into all future large updates
- fixed last week's yandere post parser link update--the post url class should move from legacy moebooru to the new yandere parser correctly
- the big maintenance tasks of duplicate file potentials search and repository processing will now take longer breaks if the database is busy or their work is otherwise taking a long time. if the client is cluttered with work, they shouldn't accidentally lag out other areas of the program so much
- label update on ipfs service management panel: the server now reports 'nocopy is available' rather than 'nocopy is enabled'
- label update on shortcut: 'open a new page: search page' is now '...: choose a page'
- fixed the little info message dialog when clicking on the page weight label menu item on the 'pages' menu
- 'database is complicated' menu label is updated to 'database is stored in multiple locations'
- _options->gui pages->controls_ now has a little explanatory text about autocomplete dropdowns and some tooltips
- migrate database dialog has some red warning text up top and a small layout and label text pass. the 'portable?' is now 'beneath db?'
- the repository hash_id and tag_id normalisation routines have two improvements: the error now shows specific service_ids that failed to look up, and the mass-service_hash_id lookup now handles the situation where a hash_id is mapped by more than one service_id
- repository definition reprocessing now corrects bad service_id rows, which will better heal clients that previously processed bad data
- the client api and server in general should be better about giving 404s on certain sorts of missing files (it could dump out with 500 in some cases before)
- it isn't perfect by any means, but the autocomplete dropdown should be a _little_ better about hiding itself in float mode if the parent text input box is scrolled off screen
- reduced some lag in image neighbour precache when the client is very busy
- .
- boring code cleanup:
- removed old job status 'begin' handling, as it was never really used. jobs now start at creation
- job titles, tracebacks, and network jobs are now get/set in a nicer way
- jobs can now store arbitrary labelled callable commands, which in a popup message become labelled buttons
- added some user-callable button tests to the 'make some popups' debug job
- file import queues now have the ability to discern 'master' Post URLs from those that were created in multi-file parsing
- wrote the behind-the-scenes guts to create a new downloader page programmatically and start a subscription 'gap' query download
- cleaned up how different timestamps are tracked in the main controller
next week
I am now on vacation for a week. I'm going to play vidya, shitpost the limited E3, listen to some long music, and sort out some IRL stuff. v444 should therefore be on the 23rd. I'll do some more cleanup work and push on multiple local file services. Thank you for your support!
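As a back-of-the-envelope illustration of where a figure like 'about a 7,245x4,075 image' comes from: it is roughly the largest 16:9 bitmap whose rendered form fits under the percentage threshold. A sketch, assuming ~4 bytes per rendered pixel (my assumption; the client's real numbers may differ):
[code]
import math

def biggest_16_9_image(cache_bytes, threshold, bytes_per_pixel=4):
    """Largest 16:9 resolution whose rendered bitmap fits under threshold * cache."""
    budget_pixels = cache_bytes * threshold / bytes_per_pixel
    width = math.sqrt(budget_pixels * 16 / 9)
    return int(width), int(width * 9 / 16)

# e.g. a 450MB image cache with a 25% single-image limit
print(biggest_16_9_image(450 * 1024 ** 2, 0.25))  # roughly (7240, 4072)
[/code]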
>>4421
>Thank you for your support!
Best wishes OP and beware of the fucking normies.
(57.00 KB 174x1002 ClipboardImage.png)
>about 200 pages open
>set all subscriptions to run in the background (which creates one page per subscription)
>forget about it
>once it gets over the page threshold it starts creating a page warning for every page it tries to create over the threshold
>creates too many warnings and client crashes
Turns out python can't into too many windows.
I would love if the "pages" drop down menu actually listed all the open pages, with submenus of "pages of pages", so you could click one to have the main view switch to that page. It could be put under the top "X pages open" menu option, for example. If you have tons of pages open it's real slow and cumbersome to try to find a specific one by scrolling with the little arrows to the right of the tabs. In fact, that is probably the slowest and most user-unfriendly thing I do in Hydrus. I can't imagine this would be a ton of effort to add, and it would help so much with usability.
>>4423 solution: migrate off python ez
rule34.xxx now has an anti-ddos thing where it "checks your browser" before letting you visit the website. Hydrus now fails to download from there, calling it an "unsolvable cloudflare issue". I tried copying rule34.xxx cookies to hydrus, but it still didn't work. Is it truly unsolvable? If so, is it perhaps because I don't have a rule34.xxx account? Does anyone with an account know?
I had a twitter subscription that missed a number of files from certain days. Other files from tweets on those same days from other subscriptions were downloaded fine. Now I'm paranoid I missed a bunch of tweets from other accounts as well, or that I will miss a lot more tweets in the future. Is there an easy way to check this? Like, attempt to download all files for all subscriptions in the last 10 days? Another weird thing is that the twitter gallery dl for that account only goes back a few days, while others go back many more days.
(37.67 KB 854x694 Untitled.png)
The "# messages [dismiss all]" notification at the bottom right of my hydrus stays on top no matter what I do, unless I minimize hydrus entirely. It wasn't always like this, but it happened at some point today. The only thing I can think of my having done to trigger this was locking my computer (windows 7) as I went afk to take a piss while hydrus was importing. I'm not sure if that was actually triggered it, but it is the only idea I have. I imported many things since booting the client on this version (442). Pic related.
>>4435
>Like, attempt to download all files for all subscriptions in the last 10 days?
This is what the gallery downloader does.
(218.09 KB 349x611 20210612.png)
It looks like "Review your fate" is getting incorrect file number in the inbox. Or maybe now it adds the files from the trash to it.
>>4440
Yes, but my problem is that it doesn't. I've set it to show new files, files already in inbox, and files already in archive, but it still doesn't.
>>4442
I've noticed it doesn't always update immediately.
>>4435
I couldn't even get the Twitter downloader to download anything; I had to go through the Nitter downloader.
How do you use system predicates in advanced searches? The advanced search box doesn't bring up the drop-down menu for system predicates like the normal one does, and inputting system predicates manually seems to treat them as normal tags instead of special system tags, so the search doesn't actually work right.
Hydrus keeps popping up this error when I add tags through the manage tags window. It has me a little worried that my tags might be getting corrupted, but from the little bit I understand here, I think it's just related to the window itself and not the tags.
RuntimeError
Internal C++ object (ListBoxTagsMediaTagsDialog) already deleted.
File "hydrus/client/gui/lists/ClientGUIListBoxes.py", line 1829, in mouseDoubleClickEvent
action_occurred = self._Activate( shift_down )
File "hydrus/client/gui/ClientGUITagSuggestions.py", line 80, in _Activate
self._activate_callable( tags, only_add = True )
File "hydrus/client/gui/ClientGUITags.py", line 2652, in AddTags
self.EnterTags( tags, only_add = only_add )
File "hydrus/client/gui/ClientGUITags.py", line 2670, in EnterTags
self._EnterTags( tags, only_add = only_add )
File "hydrus/client/gui/ClientGUITags.py", line 2498, in _EnterTags
self._tags_box.SetTagsByMedia( self._media )
File "hydrus/client/gui/lists/ClientGUIListBoxes.py", line 3602, in SetTagsByMedia
self.SetTagsByMediaResults( media_results )
File "hydrus/client/gui/lists/ClientGUIListBoxes.py", line 3623, in SetTagsByMediaResults
self._DataHasChanged()
File "hydrus/client/gui/lists/ClientGUIListBoxes.py", line 1099, in _DataHasChanged
self._SetVirtualSize()
File "hydrus/client/gui/lists/ClientGUIListBoxes.py", line 1695, in _SetVirtualSize
self.setWidgetResizable( True )
>>4438
My gallery downloader page finished what it was doing, so instead of adding more, I decided to close hydrus to update to the latest version. But it's been on "hydrus client exiting" for 50 minutes now. I'll leave it for now, but I don't believe it's ever taken this long to close before.
>>4454 It's been two hours of it frozen on this screen, so I am going to force terminate it, update, then relaunch.
Not sure whether this has come up before, but I'm wondering if there could be a booru-style "preview-size" file service for web viewing, perhaps something parallel to the existing thumbnail system. It's painful to have to download numerous >=5MB archival-quality png files for web or phone viewing. If they could be resized to 800-1000px wide jpg files and made accessible through the client API, full-sized images would come up much faster and with less bandwidth consumption on clients such as animeboxes, and the option would still be there to download/view the full-sized image. Since client files are stored in f00 to fff and thumbnails are stored in t00 to tff, previews could be stored in p00 to pff, and the whole thing could be optional.
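For scale, generating that kind of preview is cheap with Pillow — a sketch of what a p00-pff style pass could look like (sizes and paths are illustrative, not an existing hydrus feature):
[code]
from PIL import Image

def make_preview(src_path, dst_path, max_width=1000):
    """Save a bandwidth-friendly jpg preview, capped at max_width pixels wide."""
    with Image.open(src_path) as im:
        im = im.convert('RGB')  # jpg has no alpha channel
        ratio = min(1.0, max_width / im.width)
        new_size = (int(im.width * ratio), int(im.height * ratio))
        im.resize(new_size, Image.LANCZOS).save(dst_path, 'JPEG', quality=85)

make_preview('f00/big_archival.png', 'p00/big_archival.jpg')
[/code]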
>>4418
Still haven't figured this out, unfortunately.
>>4432
pasting a solution from discord
>heliosxx: I've been getting cloudflare errors when fetching from rule34.xxx. Came here looking for answers. Went fooling with network->downloader components->manage url classes, then look for the rule34.xxx file page entry and click edit, scroll down and find 'send referral url' and change it to 'always use converter referral url'. Seems to have fixed it for me.
>>4466 For some reason, it just started working again a couple days ago. I didn't change anything.
hydrus absolutely dies when trying to load many files, in the hundred-thousand range, as expected. It's even worse when you try to add a tag to all those files as a batch. Can something be done that allows the user to apply tags to a bunch of files in a search without loading the actual files? For example, if I search feet and it returns 200k results, could I add a tag to all 200k results without hydrus trying to load all of them at once and killing the client? By the way, a dynamic loading feature would be nice as well, instead of trying to load all results at once.
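In the meantime, the Client API can already do search-then-tag without the UI loading a single thumbnail — a rough sketch, assuming the API is enabled under services->manage services, and assuming I've remembered the parameter names right (check the Client API help):
[code]
import json
import requests

API = 'http://127.0.0.1:45869'  # default Client API port
HEADERS = {'Hydrus-Client-API-Access-Key': 'YOUR_ACCESS_KEY_HERE'}

# find matching files without ever opening a search page in the UI
r = requests.get(API + '/get_files/search_files',
                 params={'tags': json.dumps(['feet'])}, headers=HEADERS)
file_ids = r.json()['file_ids']

# resolve a batch of ids to hashes, then tag them; loop in batches for the full 200k
r = requests.get(API + '/get_files/file_metadata',
                 params={'file_ids': json.dumps(file_ids[:256])}, headers=HEADERS)
hashes = [m['hash'] for m in r.json()['metadata']]

requests.post(API + '/add_tags/add_tags', headers=HEADERS,
              json={'hashes': hashes, 'service_names_to_tags': {'my tags': ['checked']}})
[/code]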
The suit tag has a 'meta:junk tag' next to it... what should I be using instead of 'suit'?