/t/ - Technology

Discussion of Technology


(4.11 KB 300x100 simplebanner.png)

Hydrus Network General #4 Anonymous Board volunteer 04/16/2022 (Sat) 17:14:57 No. 8151
This is a thread for releases, bug reports, and other discussion for the hydrus network software.

The hydrus network client is an application written for Anon and other internet-fluent media nerds who have large image/swf/webm collections. It browses with tags instead of folders, a little like a booru on your desktop. Advanced users can share tags and files anonymously through custom servers that any user may run. Everything is free, privacy is the first concern, and the source code is included with the release. Releases are available for Windows, Linux, and macOS.

I am the hydrus developer. I am continually working on the software and try to put out a new release every Wednesday by 8pm EST.

Past hydrus imageboard discussion, and these generals as they hit the post limit, are being archived at >>>/hydrus/

If you would like to learn more, please check out the extensive help and getting started guide here: https://hydrusnetwork.github.io/hydrus/
https://www.youtube.com/watch?v=nShSEUBKe3o

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v481/Hydrus.Network.481.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v481/Hydrus.Network.481.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v481/Hydrus.Network.481.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v481/Hydrus.Network.481.-.Linux.-.Executable.tar.gz

I had a great week. Lots of different small jobs done.

notes and hover windows

I'm happy with last week's work making notes show in media viewers, but I introduced some little bugs while rewriting hover windows. I have now fixed the bad text colour behind the top hover, the problem where clicking on tags or greyspace was propagating up to the archive/delete and duplicate filters, the bad hover panel colour on non-default stylesheets, and some note window position and size issues.

Also, for notes, you can now right-click them to collapse them in the hover window. Right-click again on the name to expand again. This is a test, really, just to see if it helps navigating files with many long notes. Double-clicking on the note tab in the edit dialog lets you rename, and a checkbox under the new options->notes now lets you choose whether the text caret starts at the beginning or end of the document when editing.

Furthermore, I have updated all the icon buttons in all the hovers to no longer take focus when you click on them. They were previously stealing arrow key and space input after a click (to do button-to-button form navigation), which meant you couldn't click on, say, a duplicate filter action button and then go back to arrow keys to navigate. Now you should be able to mix clicks and arrow keys without trickery. If this affects you, let me know how it goes!

other highlights

If you didn't like the recent 'ctrl- and shift-clicks no longer show files in the preview viewer' change, check out the new checkboxes under options->gui pages. You can make either click type focus for all files again, or just files with no duration--if you don't want noisy videos being annoying while you ctrl-click.

The 'advanced mode' autocomplete dropdown now has two 'OR' buttons. The left one opens a new empty OR edit dialog, the right one opens the advanced text parsing input as before.

full list

- fixes and improvements after last week's hover and note work:
- fixed the text colour behind the top middle hover window
- stopped clicks on the taglist and hover greyspace being duplicated up to the main canvas (this affected the archive/delete and duplicate filter shortcuts)
- fixed the background colour of the hover windows when using non-default stylesheets
- fixed an issue where the notes hover window--after having shown some notes--could then lurk in the top-left corner when it should have been hidden completely
- cleaned up some old focus test logic that was used when hovers were separate windows
- rewrote how each note panel in the new hover is stored. a bunch of sizing and event handling code is less hacked
- significantly improved the accuracy of the 'how high should the note window be?' calculation, so notes shouldn't spill over so much or have a bunch of greyspace below
- right- or middle-clicking a note now hides its text. repeat on its name to restore. this should persist through an edit, although it won't be reflected in the background atm. let's see how it works as a simple way to quickly browse a whole stack of big notes
- a new 'notes' option panel lets you choose if you want the text caret to start at the beginning or end of the document when editing
- you can now double-click a note tab in 'edit notes' to rename the note. some styles may let you double-click in note greyspace to create a new note, but not all will handle this (yet)
- as an experiment, all the buttons on the media viewer hover windows now do not take focus when you click them. this should let you, for instance, click a duplicate filter processing button and then use the arrow keys and space to continue to navigate. previously, clicking a button would focus it, and navigation keys would be intercepted to navigate the 'form' of the buttons on the hover window. you can still focus buttons with tab. if this affects you, let me know how this goes!
- .
- misc:
- added checkboxes to _options->gui pages_ to control whether ctrl- and shift- selects will highlight media in the preview viewer. you can choose to only do it for files with no duration if you prefer
- the 'advanced mode' tag autocomplete dropdown now has 'OR' and 'OR*' buttons. the former opens a new empty OR search predicate in the edit dialog, the latter opens the advanced text parser as before
- the edit OR predicate panel now starts wider and with the text box having focus
- hydrus is now more careful about deciding whether to make a png or a jpeg thumbnail. now, only thumbnails that have an alpha channel with interesting data in it are saved to png. everything else is jpeg (a rough sketch of that test follows at the end of this post)
- when uploading to a repository, the client will now slow down or speed up depending on how fast things are going. previously it would work on 100 mappings at a time with a forced 0.1s wait, now it can vary between 1-1,000 weight
- just to be clean, the current files line on the file history chart now initialises at 0 on your first file import time
- fixed a bug in the 'if file is missing, remove record' file maintenance job. if none of the files yet scanned had any urls, it could error out since the 'missing and invalid files' directory was yet to be created
- linux users who seem to have mpv support yet are set to use the native viewer will get a one-time popup note on update this week, just to let them know that mpv is stable on linux now and how to give it a go
- the macOS App now spits out any mpv import errors when you hit _help->about_, albeit with some different text around it
- I maybe fixed the 'hold shift to not follow a dragged page' tech for some users for whom it did not work, but maybe not
- thanks to a user, the new website now has a darkmode-compatible hydrus favicon
- all file import options now expose their new 'destination locations' object in a new button in the UI. you can only set one destination for now ('my files', obviously), but when we have multiple local file services, you will be able to set other/multiple destinations here. if you set 'nothing', the dialog will moan at you and stop you from ok-ing it
- I have updated all import queues and other importing objects in the program to pause their file work with appropriate error messages if their file import options ever has a 'nothing' destination (this could potentially happen in future after a service deletion). there are multiple layers of checks here, including at the final database level
- misc code cleanup
- .
- client api:
- added a 'create_new_file_ids' parameter to the 'file_metadata' call. this governs whether the client should make a new database entry and file_id when you ask about hashes it has never seen before. it defaults to false, which is a change on previous behaviour
- added help talking about this
- added a unit test to test this
- added archive timestamp and hash hex sort enum definitions to the 'search_files' client api help
- client api version is now 31

next week

Next week is cleanup. Nothing too exciting, but I'd like to break the database code up a bit more.
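Regarding the png/jpeg thumbnail item above, here is one way to approximate the 'does this alpha channel have interesting data' test with Pillow. This is just an illustrative sketch, not the actual hydrus code:

from PIL import Image

def alpha_is_interesting( path ):
    # a thumbnail only needs png if some pixel is actually transparent
    im = Image.open( path ).convert( "RGBA" )
    lo, hi = im.getextrema()[ 3 ]  # ( min, max ) of the alpha band
    return lo < 255

# save to png when alpha_is_interesting() is True, otherwise jpeg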
Is there a way to set the media viewer to use integer scaling (I think that's what it's called) rather than fitting the view to the window, so that hydrus chooses the highest zoom at which all pixels are the same size and the whole image is still visible? My understanding is that nearest neighbor is a lossless scaling algorithm when the rendered view size is a multiple of the original; otherwise you get a bunch of jagged edges from the pixels being duplicated unevenly. It looks like Hydrus only has options to use "normal zooms" (what you set manually in the options? I'm confused by this), always choosing 100% zoom, or scaling to canvas size regardless of whether that means a weird zoom level (like 181.79%) that causes nearest-neighbor to create jagged edges.
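Something like this is the rule I'm imagining, if it helps explain (a made-up function, obviously not hydrus code):

def best_integer_zoom( img_w, img_h, canvas_w, canvas_h ):
    # the largest whole-number zoom where the full image still fits the canvas
    zoom = min( canvas_w // img_w, canvas_h // img_h )
    # if the image is already bigger than the canvas, integer upscaling
    # doesn't apply and you'd fall back to fit-to-canvas or 100%
    return zoom if zoom >= 1 else None

So a 500x400 image on a 1920x1080 canvas would get 2x (1000x800) rather than the 270% that fit-to-canvas would pick.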
When I delete a file in Hydrus, how sure can I be that it is COMPLETELY gone? Are there any remnants left behind?
>>8156 yeah, all the metadata for the file (tags and urls and such) is still there. There isn't currently a way to remove that stuff.
>>8154 Yeah, under options->media, in the filetype handling list, the edit dialog has 'only permit half and double zooms'. That locks you to 50%, 100%, 200%, 400%, etc. It works ok for static gifs and some pngs if you have a ton of pixel art, but I have never really liked it myself. Set the 'scale to the canvas size' option to 'scale to the largest regular zoom that fits'; I think that'll work with the 50/100/200/400 too. Let me know if it doesn't.

>>8156 >>8157 Once the file is out of your trash, it will be sent to your OS's recycle bin, unless you have set in options->files and trash to permanently delete instead. Its thumbnail is permanently deleted. In terms of the file itself, it is completely gone from hydrus and you are then left with the normal issues of deleting files permanently from a disk. If you really need to remove traces of it from the drive, you'll need a special program that repeatedly shreds your empty disk sectors.

In terms of metadata, hydrus keeps all other metadata it knows about the file: the file's hash (basically its name), its resolution, filesize, a perceptual hash that summarises how it looked, the tags it has, ratings you gave it, URLs it knows the file is at, and when it was deleted. It may have had some of this information before it was imported (e.g. its hash and tags on the PTR) if you sync with the public tag repository. Someone who accessed your database and knew how hydrus worked would probably be able to reconstruct that you once imported this file. There are no simple ways to tell the client 'forget everything you ever knew about this file' yet. Hydrus keeps metadata because that is useful in many situations. Deletion records, for instance, help the downloader know not to re-import something you previously deleted. That said, I am working on a system that will be able to purge file knowledge on command, and other related database-wide cleanup of now-useless definition records, but it will take time to complete. There are hundreds of tables in the database that may refer to certain definitions.

If you are concerned about your privacy (and everyone should be!), I strongly recommend putting your hydrus database inside an encrypted container, like with veracrypt or ciphershed or similar software. If you are new to the topic, do some searching around on how it works and try some experiments.

If you are very desperate to hide that you once had a file, I can show you a basic hack to obscure it using SQLite. Basically, if you know the file's hash, you go into your install_dir/db folder, run the sqlite3 executable, and then do this:

(MAKE A BACKUP FIRST IN CASE THIS GOES WRONG)

.open client.master.db
update hashes set hash = x'0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef' where hash = x'06b7e099fde058f96e5575f2ecbcf53feeb036aeb0f86a99a6daf8f4ba70b799';
.exit

That first hash, "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef", should be 64 characters of random hex. The second should be the hash of the file you want to obscure. This isn't perfect, but it is a good method if you are desperate.
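If you need a quick way to generate those 64 random hex characters, python can do it in one line from the same terminal:

python -c "import os; print(os.urandom(32).hex())"

32 random bytes come out as exactly 64 hex characters.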
I just updated to the latest version, and there seems to be a serious (well, seriously annoying, but not dangerous) bug where frames/panels register mouse clicks as being higher up when you scroll down, as if you didn't scroll down. It's happening with the main tag search box drop down menu, and also in the tag edit window where tags are displayed and you can click on them to select them. I'm on Linux.
>>8159 Sorry, yeah, I messed something up last week doing some other code cleaning. I will fix it for next week and add a test to make sure it doesn't happen again. Sorry for the trouble. I guess I don't scroll and click much when I dev or use the client IRL.
>>8159
>on Linux
I confirm that.
>>8159 I've got this problem on windows as well.

Also, am I the only one experiencing extremely slow PTR uploads? Now, instead of uploading 100 every 0.1 seconds, it is more like 1-4 every 0.1s.
>>8164 i'm also getting this error when uploading to the PTR

v481, win32, frozen
StreamTimeoutException
Connection successful, but reading response timed out!

Traceback (most recent call last):
  File "urllib3\connectionpool.py", line 426, in _make_request
  File "<string>", line 3, in raise_from
  File "urllib3\connectionpool.py", line 421, in _make_request
  File "http\client.py", line 1344, in getresponse
  File "http\client.py", line 307, in begin
  File "http\client.py", line 268, in _read_status
  File "socket.py", line 669, in readinto
  File "urllib3\contrib\pyopenssl.py", line 326, in recv_into
socket.timeout: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "requests\adapters.py", line 439, in send
  File "urllib3\connectionpool.py", line 726, in urlopen
  File "urllib3\util\retry.py", line 410, in increment
  File "urllib3\packages\six.py", line 735, in reraise
  File "urllib3\connectionpool.py", line 670, in urlopen
  File "urllib3\connectionpool.py", line 428, in _make_request
  File "urllib3\connectionpool.py", line 335, in _raise_timeout
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='ptr.hydrus.network', port=45871): Read timed out. (read timeout=60)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 1460, in Start
    response = self._SendRequestAndGetResponse()
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 2036, in _SendRequestAndGetResponse
    response = NetworkJob._SendRequestAndGetResponse( self )
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 710, in _SendRequestAndGetResponse
    response = session.request( method, url, data = data, files = files, headers = headers, stream = True, timeout = ( connect_timeout, read_timeout ) )
  File "requests\sessions.py", line 530, in request
  File "requests\sessions.py", line 643, in send
  File "requests\adapters.py", line 529, in send
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='ptr.hydrus.network', port=45871): Read timed out. (read timeout=60)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "hydrus\core\HydrusThreading.py", line 401, in run
    callable( *args, **kwargs )
  File "hydrus\client\gui\ClientGUI.py", line 318, in THREADUploadPending
    service.Request( HC.POST, 'update', { 'client_to_server_update' : client_to_server_update } )
  File "hydrus\client\ClientServices.py", line 1206, in Request
    network_job.WaitUntilDone()
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 1872, in WaitUntilDone
    raise self._error_exception
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 1643, in Start
    raise HydrusExceptions.StreamTimeoutException( 'Connection successful, but reading response timed out!' )
hydrus.core.HydrusExceptions.StreamTimeoutException: Connection successful, but reading response timed out!
(27.33 KB 835x522 2022-04-17 150515.png)

(714.79 KB 1457x934 2022-04-17 150704.png)

(4.31 KB 1231x34 2022-04-17 150838.png)

Apologies if the answer is already somewhere on the /hydrus/ board; I haven't been able to find it yet. I'm wondering how to make hydrus able to download pictures from 8chan (using hydrus companion) when direct access results in a 404? I was assuming some fuckery with cookies, but sending the cookies from 8chan through hydrus companion to the hydrus client seemingly made no difference.
>>8166 afaik there's no way to import directly from urls of "protected" boards, but I'd love to be proven wrong.
>>>/hydrus/17585
>Is there a way to automatically add a file's filename to the "notes" of a Hydrus file when importing? Some of the files have date info or window information if they are screenshots and I'd like to store that information somehow. If not, is there some other way to store the filenames so that they can be easily accessible after importing?

>>>/hydrus/17586
>>notes
>I think notes are for when there's a region of an image that gets a label (think gelbooru translations), it's not the best thing for your usecase. The best way would be to have them under a "filename" namespace.

I'm not either of these people, but a filename namespace is useless if the filename's case matters, since Hydrus will just turn it all into lowercase. In those scenarios I've had to manually add the filename to the notes for each one... painful. Also, somewhat related: hydrus strips the key from mega.nz urls, so I have to manually add those to notes as well. More pain.

>>8166
Have you tried giving hydrus your user-agent http header as well as the cookies?
>>8174
>Have you tried giving hydrus your user-agent http header as well as the cookies?
No I haven't. However, I'm still quite inexperienced when it comes to using hydrus, so I don't really know how I'd do that. Using the basic features of hydrus companion is pretty much as far as my skillset goes atm. Would you please kindly explain how I might do what you described?
Trying to add page tags to my imported files is turning out to be an even bigger headache than I expected. The page namespace doesn't specify what it is a page of, so you can end up with multiple contradictory page tags. For example, an artist uploads sets of 1-3 images frequently to his preferred site, but posts larger bundles less frequently to another site. Or he posts a few pages at a time of a manga in progress, and when it's finished he aggregates all the pages in a single post for our convenience. Either way, you can end up with images that have two different page tags, both of which are technically correct for a given context, but the tags themselves don't contain enough information to tell which context they're correct in. If I wanted to be really thorough, I could make a separate namespace for each context a page can exist in, but then I'd be creating an even bigger headache for myself whenever I want to sort by pages. The best I can imagine would be some kind of nested tag system, so you can specify the tags "work:X" and "page:Y(of work:X)", and then sort by "work-page(of work)". As an added bonus, it would make navigation a lot smoother in a lot of contexts. For example, if you notice an image tagged with chapter:1 and you want to see the rest of that particular chapter.
>>8183 Hydrus sucks at organizing files that are meant to be a sequential series. This has been a known problem for a long time unfortunately.
>>8183
>For example, if you notice an image tagged with chapter:1 and you want to see the rest of that particular chapter.
You may use kinda nested namespaces:

1 - namespace:whatever soap opera you want (to identify the group)
2 - namespace:chapter 1 (to identify the sub-group)
3 - namespace:chapter 1 - page 01 (to identify the order)

So searching for "whatever soap opera you want" will bring you all related files, then add the chapter to narrow the files, and then sort those files by namespace number. Done.
>>8190 >So searching for "whatever soap opera you want" will bring you all related files, then add the chapter to narrow the files, and then sort those files by namespace number. At that point you're basically navigating folders in a file explorer, just more clumsy. That's exactly what I was trying to get away from when I installed hydrus.
I had a great week of simple work. I fixed some bugs--including the scrolled taglist selection issue--and improved some quality of life. The release should be as normal tomorrow.
>>8192
>At that point you're basically navigating folders in a file explorer
What are you talking about? In Hydrus all files are in a centralized directory and searched via a database. I understand the hassle of tagging manually, but no software is clairvoyant and reads your mind about what exactly you are searching for.
>>8813 If ordered sets are important to you, installing danbooru is an option; they do put their source up on github. Last I tried, it was a pain in the ass to get working, but I did eventually get it going. Though it did lack a number of hydrus features I've gotten used to.
>>8183 Hydrus works off of individual files. You can adapt it to multi-file works, but the more robust a solution you need, the more you'll butt up against Hydrus' core design. The current idiomatic solution of generic series, title, chapter, page, etc. namespaces works for 90% of things (with another 9% being workable by ignoring all but one context), but if you need a many-to-many relationship, the best you can do is probably use bespoke namespaces for each collection (e.g. "index of X:1", "index of Y:2") and then use the custom namespace sort to view the files in whatever context you've defined.

I guess an ease-of-use feature that could get added would be an entry in the tag context menu to sort by namespace. That way you wouldn't need to type it out every time.
>>8197
>That way you wouldn't need to type it out every time.
In the future, drag and drop tags may be the solution.
I want to remove the ptr from my database. Is there a way to use the tag migration feature to migrate tag relationships only for tags used in my files? You can do it with the actual tags, but I don't see an option to do something similar for relationships, and I'd rather not migrate over thousands of parents/children and siblings for tags I'll never see.
>>8195 You have to add multiple search terms to narrow it down to something useful, similar to how a file explorer requires you to navigate through several subdirectories to get to what you want. And for moving from chapter 1 to chapter 2, you need to remove one search term and add another. I like how hydrus allows me to pick exactly the search term I want, no matter how broad or narrow, and, with the right tags and the right namespace sorting rules, sorts everything in view into logical sets and logical sequences within those sets.

Maybe I should give a more concrete example of how I manage my stuff. Say an artist uploads both to pixiv and pixiv fanbox. For both services, a post often contains several images in a specific sequence. So I subscribe to both and set the downloader to tag images with the numerical id of the post the image was pulled from (namespace "post id:"), the image's index within all the images in the post (namespace "page:"), and the service it was pulled from (namespace "site:"). Then I just have to search for the artist and set namespace sorting to "site-post id-page", and everything works great.

But then the artist uploads the same image to both services, and suddenly I have an image with two post id tags and two page tags. The quickest solution would be to have one version of each namespace for each site; then my sorting rule would look like "site-fanbox post id-pixiv post id-fanbox page-pixiv page". Looks ugly, but it does the job. If I only ever downloaded from those two services, I could deal with it, but with all the different sites I download from, my sorting rules become a huge fucking mess.

I would probably be fine with any quick hack that allows me to define unique namespaces that get treated as the same namespace for the purpose of sorting (for example, "post id(site:pixiv)" and "post id(site:fanbox)" are treated as if they're just "post id"). It wouldn't sort reliably in every context, but it would be good enough for my purposes. However, the dream would be if (assuming the sorting rule is "site-post id") it first sorts by site, and then looks for a "post id(*):" tag, where * is the site it sorted by. Unfortunately I don't know enough about databases or sorting to tell how feasible something like this would be. A rough sketch of what I mean is below.
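To make that concrete, here's roughly the lookup I'm imagining, with made-up tag data (illustration only, nothing to do with hydrus internals):

files = [
    { "site": "fanbox", "post id(site:fanbox)": "123", "page(site:fanbox)": "1" },
    { "site": "pixiv", "post id(site:pixiv)": "98765", "page(site:pixiv)": "2" },
]

def sort_key( tags ):
    site = tags.get( "site", "" )
    # look for the per-site namespace first, fall back to the generic one
    post_id = tags.get( "post id(site:{})".format( site ), tags.get( "post id", "" ) )
    page = tags.get( "page(site:{})".format( site ), tags.get( "page", "" ) )
    # real code would want to compare ids numerically, not as strings
    return ( site, post_id, page )

files.sort( key = sort_key )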
>>8166 Looks like you need to send the referral URL with your request. The 8chan.moe thread downloader that comes with hydrus already takes care of that, so I assume you're trying to download individual files or something? I think the proper thing here would be for hydrus companion to attach the thread you found the image in as the referral URL, but I'm not sure if the hydrus API even supports that at the moment. So failing that, you can give 8chan.moe files a URL class and force hydrus to use https://8chan.moe/ as the referral URL for them when no other referral URL is provided. Hopefully this won't get you banned or anything.
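If you want to test the referral URL idea outside hydrus, here's a quick sketch with python requests (the file URL is hypothetical--grab a real direct link from a thread, and I'm guessing the Referer header is what the server checks):

import requests

url = "https://8chan.moe/.media/examplehash.png"  # hypothetical direct file link

r = requests.get( url )
print( r.status_code )  # 404 when there is no referral context, if I'm right

r = requests.get( url, headers = { "Referer": "https://8chan.moe/" } )
print( r.status_code )  # hopefully 200 with the referral URL attached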
https://www.youtube.com/watch?v=PGEZutQ-tCM

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v482/Hydrus.Network.482.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v482/Hydrus.Network.482.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v482/Hydrus.Network.482.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v482/Hydrus.Network.482.-.Linux.-.Executable.tar.gz

I had a great week doing cleanup and other simple work.

highlights

I fixed the problem where clicks on a scrolled taglist were going to the wrong location. I was cleaning up some ancient wx->Qt code hacks, and it seems I rarely scroll and click when working, so I never noticed the problem. I have a new test to make sure this does not happen again. Sorry for the trouble!

The URLs in the top-right hover menu are now styled better. No longer underlined, and now colourable by QSS. I have updated all the default stylesheets that come with the client (you can set these under options->style) to have some decent colours. If you have your own custom QSS, check my default to see how to set it yourself.

You can now set duplicate action options to 'always archive both files', if you want to play with making the duplicate filter do some of the work of the archive/delete filter.

Also, the duplicate filter now has improved image prefetch. There should be less flickering when you switch from A to B the first time and when you action a pair and move to the next. Please note that if you still get flicker for 4k images, try boosting the image cache size under options->speed and memory (I boosted the default up to 384MB this week, so you might like to give it some more too).

full list

- misc:
- fixed the stupid taglist scrolled-click position problem--sorry! I have a new specific weekly test for this, so it shouldn't happen again (issue #1120)
- I made it so middle-clicking on a tag list does a select event again
- the duplicate action options now let you say to archive both files regardless of their current archive status (issue #472)
- the duplicate filter is now hooked into the media prefetch system. as soon as 'A' is displayed, the 'B' file will now be queued to be loaded, so with luck you will see very little flicker on the first transition from A->B
- I updated the duplicate filter's queue to store more information and added the next pair to the new prefetch queue, so when you action a pair, the A of the next pair should also load up quickly
- boosted the default sizes of the thumbnail and image caches up to 32MB and 384MB (from 25/150) and gave them nicer 'bytes quantity' widgets in the options panel
- when popup windows show network jobs, they now have delayed hide. with luck, this will make subscriptions more stable in height, less flickering as jobs are loaded and unloaded
- reduced the extremes of the new auto-throttled pending upload. it will now change speed slower, on a less strict schedule, and won't go as fast or as slow at the extremes
- the text colour of hyperlinks across the program, most significantly in the top-right media hover window, can now be customised in QSS. I have set some ok defaults for all the QSS styles that come with the client; if you have a custom QSS, check out my default to see what you need to do. also, hyperlinks are no longer underlined and you can't 'select' their text with the mouse any more (this was a weird rich-text flag)
- the client api and local booru now have a checkbox in their manage services panel for 'normie-friendly welcome page', which switches the default ascii art for an alternate
- fixed an issue with the hydrus server not explicitly saying it is utf-8 when rendering html
- may have fixed some issues with autocomplete dropdowns getting hung up in the wrong position and not fixing themselves until a parent resize event or similar
- .
- code cleanup:
- about 80KB of code moved out of the main ClientDB.py file:
- refactored all combined files display mappings cache code from the core database to a new database module
- refactored all combined files storage mappings cache code from the core database to a new database module
- refactored all specific storage mappings cache code from the core database to a new database module
- more misc refactoring of tag count estimate, tag search, and other code down to modules
- hooked up the specific display mappings cache to the repair system correctly--it had been left unregistered by accident
- some misc duplicate action options code cleanup
- migrated some ancient pause states--repository, subscriptions, import&export folders--to the newer options structure
- migrated the image and thumbnail cache sizes to the newer options structure
- removed some ancient db and dialog code from the retired dumper system

next week

I want to catch up on some github issues and do a little more multiple local file services work.
(18.35 KB 871x737 meme collection.png)

I hope collections will be expanded upon in the future. It's very nice to be able to group together images in a page, but often I want an overview of the individual images of a group. Right now I have to right click a group and pick open->in a new page, which is awkward. Here's a quick mock-up of how I'd like it to work. Basically, show all images, but visually group them together based on the selected namespaces.
>>8203
>I assume you're trying to download individual files or something?
Yes, kinda... I'm using hydrus companion's right-click -> hydrus companion -> send to hydrus. I'm browsing threads which I don't want to watch but which contain a few select pictures I'd still like to save.

I tried looking into your suggested solution, but I'm still very inexperienced using hydrus and have so far had no luck setting up a URL class for 8chan.moe files. I'll keep trying in the meantime; just wanted to give you an update on what I was trying to do.

On an unrelated note, I did some digging and probably found what exactly the problem is. Please do not be fooled, I am no expert. Far from it. I was just lucky enough to know about inspect element, compared the direct and indirect links, and used some googling. I must reiterate that despite what it may seem, I am a complete noob at this and anything related to it. I do not possess the knowledge or skill necessary to understand probably 90% of the instructions you might throw at me if they're not in a step-by-step format. That's not a demand btw, just a cautionary word. I appreciate all the support that I can receive.

Anyway, now with that disclaimer out of the way, here's what I found. Comparing the "request headers" under the network section of inspect element for the 404 with those for the 304, I found two things of note:

Referrer Policy: strict-origin-when-cross-origin
sec-fetch-site: same-origin (vs. sec-fetch-site: none)

Googling gave me some insight as to what the 8chan administration did to achieve this frustrating but unfortunately necessary situation. As far as I can tell, this "sec-fetch-site" is filled out by the application (in this case chrome) to its liking. So all hydrus would need to do is, when requesting 8chan.moe files, send "sec-fetch-site: same-origin".

No idea if whatever I just explained is even of any use to any of you, or if you already knew all of this, but I thought it better to share what info I have instead of withholding it. The bane of all customer support, amirite? (No pictures this time because of login cookies and other identifiable info being vehemently present)
>>8210 The png I posted contains the URL class. Just go to network > downloaders > import downloaders and drag and drop the image from >>8203
Is there any way to stop hydrus from running maintenance (in my case ptr processing) while it's downloading subscriptions? I think downloading subs should prevent maintenance mode from kicking in. It always happens when I start Hydrus and leave it to download subs, because I have idle set at 5 minutes. The downloads slow to a crawl because ptr processing is hogging the database. I could raise the time to idle, but I still want it that low once hydrus has finished downloading subs...
Is there any way to export notes, like files and tags? Something like:

File: test.jpg
Tags: test.jpg.txt
Notes: test.jpg.notes.txt
>>8219 I get the impression that notes are a WIP feature. Personally I'm hoping we'll get the option to make the content parser save stuff as notes soon.
(5.19 KB 402x62 ClipboardImage.png)

>>8212 Bruh
>>8221 seems like you're not on the latest version
Are there plans to add dns over https support to hydrus? Most browsers seem to have that feature now, so it'd be cool if hydrus did too.
How do I enable a web interface for my Hydrus installation, so others can use it via my external IP? I need something simple like hydrus.app, but unfortunately it refuses to work with my external IP and only accepts localhost, even though I enabled non-local access in the API, and entering my external IP in the browser opens the same API welcome page as localhost does. Who runs that app, anyway, and where do I find support for it?
>>8164 >>8165 Thank you for these reports. I added some pending-commit auto-throttling in 481, so instead of always going for 100 rows, it could go 1-1,000 depending on how fast your machine and the PTR were doing. It seems to have backfired for some people. For 482, I capped the limits at 25-500, and I increased the tolerance of the test and reduced the acceleration. It should be less spiky while still responding to a slow database or busy PTR, but I'll be interested to know what you get.

As for the read timeout on the PTR, that's more odd. Maybe the PTR was super super busy when you were talking to it, but 60 seconds without a response seems extreme. This error is essentially harmless, so don't worry too much, please just try again later. Let me know if you still get it this week and in future. It may be the result of my auto-throttling, it may just have been the PTR being super busy one day, or it might be something else. If it keeps happening, I'll write a hook for 'the PTR is busy atm, try again later' or similar.

>>8174 Your thoughts on filenames have a similar parallel with the 'title' tag, which I was very keen on when I started hydrus but which I now generally think has been a failure. Tags are good for searching, not describing. I'd like more notes import/export support, along with the recently added Client API support, so we can play with it more for richer descriptive metadata.

For Mega URLs, try checking the 'keep fragment when normalising' checkbox in the URL Class dialog. That should keep the #blah bit. I originally added that checkbox for a Mega-supporting experiment, although I don't see anything on the github here https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Downloaders so I am not sure how well that ended up going. If you make a nice URL Class for Mega, I'd be interested in seeing it--it would probably be a good thing to add to the defaults, just on its own as a URL the client recognises out of the box.

>>8200 Ah, yeah, sorry, I don't have a nice way to filter siblings or parents by files you have yet. This has come up before, I remember now, and I'd like to add it. I recommend you migrate all the siblings and parents now, and in future, when a filtering operation becomes available, you can do it then. Some things will still be slow, like the edit sibs/parents dialog, but actually applying siblings and parents will be super fast since you won't have all the PTR mappings to work on.
>>8209 Thanks. Yeah, this is exactly what I want to do too. I am in the midst of a long rewrite to clean up some bad decisions I made when first making the thumbnail grid, and as I go I am adding more selection and display tools. Once things are less tangled behind the scenes, I will be able to write a 'group by' system like this, both the data structure behind it and the new display code needed. Unfortunately it will take time, but I agree totally.

>>8216 There's no explicit way at the moment. I have generally been comfortable with both operations working at the same time, since I'm generally ok if subs run at, say, 50% speed. I designed subs to be a roughly background activity and don't mean for them to run as fast as possible. If your machine really struggles to do both at once though, maybe I can figure out a new option. I think your best shot in the meantime, since PTR processing only works in idle time but subs can run any time, is to tweak the other idle mode options. The mouse one might work, if you often watch your subs come in live, or the 'consider the system busy if CPU above' one might work, as that stops PTR work from starting if x cores are busy. If you are tight on CPU time anyway, that could be a good test for other situations too. You can also just turn off idle PTR processing and control it manually with 'process now' in services->review services. I don't like suggesting this solution as it is a bit of a sledgehammer, but you might like to play with it.

>>8219 >>8220 Yeah, not yet, but more import/export options will come. If you know scripting, the Client API can grab them now:
https://hydrusnetwork.github.io/hydrus/client_api.html
https://hydrusnetwork.github.io/hydrus/developer_api.html
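If you want to try the Client API route, something like this is the rough shape of a notes export script. This is only a sketch: the access key is a placeholder, the search tag is whatever you like, and I am assuming the file_metadata response carries a 'notes' name->text object in your version--check the API help for the exact field names:

import json
import requests

API = "http://127.0.0.1:45869"
HEADERS = { "Hydrus-Client-API-Access-Key": "YOUR_ACCESS_KEY_HERE" }

# find some files--use whatever search you like here
tags = json.dumps( [ "creator:example artist" ] )
r = requests.get( API + "/get_files/search_files", params = { "tags": tags }, headers = HEADERS )
file_ids = r.json()[ "file_ids" ]

r = requests.get( API + "/get_files/file_metadata", params = { "file_ids": json.dumps( file_ids ) }, headers = HEADERS )

for m in r.json()[ "metadata" ]:
    notes = m.get( "notes", {} )  # assumption: a name -> text mapping
    if notes:
        # mirrors the 'test.jpg.notes.txt' idea above, keyed by hash instead of filename
        with open( m[ "hash" ] + ".notes.txt", "w", encoding = "utf-8" ) as f:
            for name, text in notes.items():
                f.write( name + ":\n" + text + "\n\n" )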
>>8223 For advanced technical stuff like that, I am limited by the libraries I use. My main 'go get stuff' network library is called 'requests', a very popular python library https://docs.python-requests.org/en/latest/ although for actual work I think it uses the core urllib3 python library https://pypi.org/project/urllib3/ . So my guess is that when python supports it and we upgrade to that new version of python, this will happen naturally, or it will be a flag I can set. I searched a bit, and there might be a way to hack it in using an external library, but I am not sure how well that would work. I am not a super expert in this area.

Is there a way of hacking this in at the system level? Can you tell your whole OS to do DNS lookups over https, in the same way you can override which IP to use for DNS? If this is important to you, that might be a way to get all your software to work that way. If you discover a solution, please let me know, I would be interested.

Otherwise, I think your best simple solution for now is to use a decent VPN. It isn't perfect, but it'll obscure your DNS lookups to smellyfeetbooru.org and similar from your ISP.
>>8232 The various web interfaces are all under active development right now. All are in testing phases, and I am still building out the Client API, so I can't promise there are any 'nice' solutions available right now. All the Client API tools are made by users. Many hang out on the discord, if you are comfortable going there: https://discord.gg/wPHPCUZ

The best place to get support otherwise is probably on the gitlab/github/whatever sites the actual projects are hosted on, if they have issue trackers and so on. For Hydrus.app I think that's here: https://github.com/floogulinc/hydrus-web

I'm not sure why your external IP access isn't working. If your friend can see the lady welcome page across the internet, they should be able to see the whole Client API and do anything else. Sometimes http vs https can be a problem here.
>>8233
>If you make a nice URL Class for Mega, I'd be interested in seeing it--it would probably be a good thing to add to the defaults, just on its own as an URL the client recognises out of the box.
Is it even possible to download mega links through hydrus? I've been using mega.py for automating mega downloads, and looking at the code for that, it seems quite a bit more complicated than just sending the right http request. https://github.com/odwyersoftware/mega.py/blob/master/src/mega/mega.py#L695 I'd love to be proven wrong, but it looks to me like this is a job for an external downloader.

Speaking of which, any plans to let us configure a fallback option for URLs that hydrus can't be configured to handle directly? At the very least, I want to be able to save URLs for later processing.
>>8237
>Is it even possible to download mega links through hydrus?
No. #fragment text is never sent to a server, so it won't work in a traditional URL. Mega use clientside javascript or their add-on to read the fragment text and convert that into navigation commands in their client. Eventually that gets converted into whatever clever streaming download system they actually have.

If you want to download Mega links, I recommend megatools or jdownloader. Just copy/paste from hydrus. Or, if you want to browse, click on the link in the top-right hover of hydrus's media viewer to open it up in your browser, but bear in mind that #fragment text will often not survive a normal OS call, so you'll need to set an explicit browser executable path under options->external programs. To save a mega link in hydrus, you'll basically have to set it manually with 'manage urls', although I know some users are working on downloaders and Client API tools that will associate these URLs automatically.

For native hydrus support, in the future I'd like to have an 'exe manager' that says something like 'this exe is called ffmpeg, it is here, and with these commands it will convert a webm to an mp4', for all sorts of external exes--waifu2x or youtube-dl, or indeed jdownloader. Then I can write a hook for that into URL Classes or whatever and automatically send a mega URL to an external downloader and pick up the downloaded files later for import, all natively in the client. This will be some time off though, so you'll have to do it manually for now.
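In the meantime, the copy/paste step is easy to script. A minimal sketch, assuming megatools' megadl command is installed (newer megatools versions spell it 'megatools dl') and you have pasted your links into a text file:

import subprocess

# one mega link per line, copied out of hydrus's 'manage urls'
with open( "mega_links.txt" ) as f:
    for line in f:
        url = line.strip()
        if url:
            # megadl reads the #key fragment from the link itself
            subprocess.run( [ "megadl", url ], check = True )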
>>8238 My problem is that some of the galleries I subscribe to might occasionally contain external links. For example, some artists uploading censored images, but also attaching a mega or google drive link containing the uncensored versions. I can easily set up the parser to look for these URLs in the message body and pursue them, but if hydrus itself doesn't know how to handle them, they get thrown out. Would be nice if these URLs could be stored in my inbox in some way, so I can check if I want to download them manually or paste them into some other program. Even after you implement a way to send the URL to an external program (which sounds great), it would be useful to see what URLs hydrus found but didn't know what to do with, so the user can know what URL classes they need to add.
>>8233
>For Mega URLs, try checking the 'keep fragment when normalising' checkbox in the URL Class dialog. That should keep the #blah bit.
Oh wow, I never knew what that option did. Thanks! I made url classes.

Note: one of the mega url formats (which I think is an older format) has no parameters at all, it's just "https://mega.nz/#blah". So if you just give it the url "https://mega.nz/" it will match that url. Kind of weird, but not really a huge issue.

>>8184 I mean, that's not really particular to hydrus. It's true for almost any booru.
Hey, after exiting the duplicate filter I was greeted with two 'NoneType' object has no attribute 'GetHash' errors:

v482, linux, source
AttributeError
'NoneType' object has no attribute 'GetHash'
Traceback (most recent call last):
  File "/opt/hydrus/hydrus/core/HydrusPubSub.py", line 138, in Process
    callable( *args, **kwargs )
  File "/opt/hydrus/hydrus/client/gui/canvas/ClientGUICanvas.py", line 3555, in ProcessApplicationCommand
    self._GoBack()
  File "/opt/hydrus/hydrus/client/gui/canvas/ClientGUICanvas.py", line 3120, in _GoBack
    for hash in ( first_media.GetHash(), second_media.GetHash() ):
AttributeError: 'NoneType' object has no attribute 'GetHash'

v482, linux, source
AttributeError
'NoneType' object has no attribute 'GetHash'
Traceback (most recent call last):
  File "/opt/hydrus/hydrus/core/HydrusPubSub.py", line 138, in Process
    callable( *args, **kwargs )
  File "/opt/hydrus/hydrus/client/gui/canvas/ClientGUICanvas.py", line 3555, in ProcessApplicationCommand
    self._GoBack()
  File "/opt/hydrus/hydrus/client/gui/canvas/ClientGUICanvas.py", line 3120, in _GoBack
    for hash in ( first_media.GetHash(), second_media.GetHash() ):
AttributeError: 'NoneType' object has no attribute 'GetHash'

I'm running the AUR version, if you need any more info let me know.
Is it just me, or are URL classes needlessly restrictive? Forcing every URL to be either a gallery, a post, or a file seems to create more issues than it solves.

A post on kemono.party contains a link to a google drive folder, so all I need to do is parse it as a pursuable URL and let the google drive downloader handle the rest, right? Except google drive folder URLs count as gallery URLs, and you can only pursue post and file URLs. Okay, I'll parse it as a next gallery page instead. Except you can only do that from a gallery parser, not from a post parser.

That leaves two solutions. One, change kemono.party posts to count as galleries so that they're allowed to direct to other gallery URLs. That fucks with URL association, since you're only allowed to set associated URLs from post URL parsers. Two, change the URL class of google drive folders so that they count as post URLs (with multiple files) so that post URL parsers are allowed to pursue them. This breaks the google drive folder parser, because it's no longer allowed to go to the next gallery page. Hold on, what if I also change next gallery pages to be pursuable URLs? Not intuitive at all, but it actually does seem to work so far.

As far as I can tell, the only reason to set something as a gallery URL is if you want it to be able to direct to other gallery URLs, or if you want to make use of URL parameters to find the next page. But jesus christ, what a headache it was to figure all of this out while navigating between the URL class manager, the parser manager, and the download page file log. I'm guessing some of these restrictions are there to prevent people from accidentally configuring a parser that requests the next page ad infinitum, but there has to be a better way. I also have a sneaking suspicion that the dev only really downloads off boorus and designed the system around that, and that features like sub-gallery pages and the "post page can produce multiple files" option had to be tacked on later to support other use cases.
Could the download blacklist/whitelist be adjusted to work on matching a whole search, rather than just specific tags? There are a lot of kinds of posts I'd rather not download, but most of the time they aren't simple enough to be accurately described with a single tag.
I was ill for the start of the week and am short on work time. Rather than put out a slim release, I will spend tomorrow doing some more normal work and put the release off a week. 483 should be on the 4th of May. Thanks everyone! >>8246 Sorry, I messed up some duplicate logic that will trigger on certain cases where it wants to back up a pair! This is fixed in 483 along with more duplicate filter code cleanup, please hang in there.
>>8260 Get well anon.
>>8239 For now, I think your best bet is to tell the parser to add these URLs as 'url to associate (source url)' rather than 'url to download/pursue'. It will attach these google drive or mega or whatever links to the file as a known url, and if you have a matching URL Class like in >>8240 you'll see them nicely named in the media viewer top-right hover, but it won't download them yet. In future, when we get support (or there's a Client API solution, whatever), we'll scan the database for all the URLs of the URL classes we now support and do them retroactively.

>>8240 Thank you, I will add these!

>>8248 I am sorry for the trouble. When I next do a network overhaul, I would like to add more tools here. You are correct that my main fear here is to stop loops or crazy big searches. I don't want a google folder that parses ten google folders that parse a pixiv artist link that then grabs 3,000 files that grab several other external links that splay out into a handful of deviant art tag searches by accident, and so on. You are also right that I built the system for boorus originally (and some gallery sites like hentai foundry or deviant art), hence the gallery/post system. Since the downloader engine is locked into this atm, everything we have done since has been working with these fundamental objects, so the more a site deviates from that model, the more shaky hydrus is with it.

Maybe I can define a 'folder tree' downloader object in a big future update, something more akin to jdownloader or a torrent client resolving a magnet link, where rather than downloading automatically, it instead parses the tree and presents you a summary in some new UI so you can choose what to download. I am not totally sure yet though, since that would be a ton of work, and meaty, usually human-triggered actions like 'download 3.2GB from this Mega' are already well handled by other software.

I would also, in the next overhaul, like to unify the edit UI in general. Jumping between the different dialogs, and the general nightmare of nested dialogs when editing parsers--I'd like to clean most of it up. Also, a highly requested feature in downloaders is downloader versioning. The update system is a complete nightmare. Just a lot of work. I am not sure when it will happen. I want to finish the multiple local file services system and then do some tag repository admin/janny workflow improvements, which will probably take me into Q3 of this year. Then I'll be free to do some other 'big' work. Most likely something to do with file relationships, since that is most popular, and then I think downloader versioning is not far behind. So, while not trying to be too optimistic or pessimistic, I hope I may be seriously planning at least some of this early/mid 2023.

>>8257 Not yet, but perhaps in the future. I am planning more metadata filtering tools, and it would be nice to unify that with other hardcoded rules we have at the moment, like 'do not download a gif > 32MB'. What sort of searches are you thinking of--something with a lot of OR clauses? Or something like 'nothing of this character by this artist'? Bear in mind that while I can expand post-download filtering too, I usually only know the tags of a file when I run the tag filters. I sometimes know the filesize and filetype right as I start a download, but I can't do something like 'veto files less than 5 seconds long' and stop the download early to save you bandwidth.

>>8262 Thanks m8, doing great now. Keep on pushing.
Is there an (easy) way to extract the data used to make the file history chart into a CSV? I'd like to play around with that data myself.
Is there a way to exclude downloading files from a specific booru/gallery site? I want to make it so that I don't download my files from Pixiv when I use the feature that looks up a file on SauceNao and IQDB and sends the link to Hydrus. I don't want to download from Pixiv since its tags are in Japanese and few in number compared to other sites like Gelbooru. Excluding the site would be the easiest solution to this issue, though another would be a downloader option that only searches IQDB, rather than having to use SauceNao and IQDB together, since that option always prioritizes downloading from Pixiv.
Minor bug report: hovering over tags while in the viewer and scrolling with the mouse wheel causes the viewer to move through files as if you were scrolling on the image itself. May be related to the bug from a few weeks ago.
I had a good couple of weeks. There are a variety of small fixes and quality of life improvements and the first version of 'multiple local file services' is ready for advanced users to test. The release should be as normal tomorrow.
>>8326 hello mr dev I just found out about this software and from reading the docs I have only this to say: based software based dev long live power users
Hey h dev, moving to a new os soon. Also, whatever happened recently in hydrus made video more stable, so I can parse it now. I know I asked about this a while ago--having a progress bar permanently under the video as an option--and I'm wondering if that ever got implemented, or if it's something you haven't gotten to yet. I run into quite a few 5 second gifs next to 3 minute long webms, and hovering the mouse over them takes up a non-insignificant amount of the video, at least enough that I have to move the mouse off it and back on just to scrub. Thanks in advance for any response.
Just want to confirm the solution for broken mpv on my somewhat sloppy debian install, as in this issue: https://github.com/hydrusnetwork/hydrus/issues/1130

As suggested, copying just the system libgmodule-2.0.so into the Hydrus directory helps, although the path may be different--I have those files at /usr/lib/x86_64-linux-gnu/.
https://www.youtube.com/watch?v=ymI1g2VjyCY

windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v483/Hydrus.Network.483.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v483/Hydrus.Network.483.-.Windows.-.Installer.exe
macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v483/Hydrus.Network.483.-.macOS.-.App.dmg
linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v483/Hydrus.Network.483.-.Linux.-.Executable.tar.gz

I had a good couple of weeks doing some regular work and getting 'multiple local file services' ready for testing.

multiple local file services

This is not ready for everyone yet! Advanced users only for now, please. I turned multiple local file services on in debug mode last week, just to see how things were looking, and it turned out surprisingly great--no big problems. For several months now I have been doing prep work for it, and that seems to have paid off, so I decided to finish the last important things and get a v1.0 out.

So, it is now possible to have multiple 'my files' services in your client, and to search, import to, and migrate files between them. These services are completely blind to each other, so searching for autocomplete tags in one will not return suggestions from another. The hope is this will allow fairly good sfw/nsfw-style separations in clients and open up interesting new contained workflows.

I am recommending this only for advanced users for now, and more so only those who have been following this feature. I have not yet written up nice help for this, and some of the UI/workflow is still not user friendly, so what I would like is for people who are enthusiastic to try it out and let me know what they think. I really haven't run into any massive errors, but I won't encourage you to go crazy on a real client yet. Go nuts on a new empty test client, or experiment carefully on a real client, just in case something goes wrong, and I will keep polishing the experience.

The basics are: you can now make a new 'local file domain' in manage services. File import options now lets you import to different or multiple local file domains, and thumbnail right-click lets you copy or move files between them too. The normal search page dropdown lets you jump between local services just like searching trash, and of course it now supports multiple domains if you want to do a union. The delete and undelete commands are similarly a little more powerful when you start adding new services. Check out the changelog for more specific details.

Next step, I think, is to make it more obvious when thumbnails/files are in certain services, since at the moment you have to scan the text on the status bar, top media hover, or thumbnail menu. Maybe custom icon rules (e.g. 'when the file is in "sfw" domain, give it a flower icon'). Then general polish like shortcut integration, maybe some more search tech, and then I really want to write a nice help document for it all to introduce normal experienced users to the idea. Some 'merge these clients' tech would be great too, so users who have been using two or more clients for years can finally combine them into one.

the rest

This is a two-week release because I was ill earlier on and it cut into my work time. So, there is a mix of different small work: updated downloaders, reworked sibling&parent help with some neat new charts, fixes and improvements to the duplicate filter, and some quality of life in UI labelling and texts. Nothing super important, but some things should be a bit smoother!
full list

- multiple local file services:
- the multiple local file services feature is ready for advanced users to test out! it lets you have more than one 'my files' service to store things, which will give us some neat privacy and management tools in future. there is no nice help for this feature yet, and the UI is still a little non-user-friendly, so please do not try it out unless you have been following it. and, while this has worked great in all testing, I generally do not recommend it for heavy use on a real client either, just in case something does go wrong. with those caveats, hit up _manage services_ in advanced mode, and you can now add new 'local file domain' services. it is possible to search, import to, and migrate files between these and everything basically works. I need to do more UI work to make it clear what is going on (for instance, I think we'll figure out custom icons or similar to show where files are), and some more search tech, and write up proper help, and figure out easy client merging so users can combine legacy clients, but please feel free to experiment wildly on a fresh client or carefully on your existing one
- if you have more than one local file service, a new 'files' or 'local services' menu on thumbnail right-click handles duplicating and moving across local services. these actions will preserve original import times (e.g. if you move from A to B and then back to A), so they should be generally non-destructive, but we may want to add some advanced tools in future. let me know how this part goes--I think we'll probably want a different status than 'deleted from A' when you just move A->B, so as not to interfere with some advanced queries, but only IRL testing will show it
- if you have a 'file import options' that imports files to multiple local services but the file import is 'already in db', the file import job will now examine if and where the file is still needed and send content update calls to fill in the gaps
- the advanced delete files dialog now gives a new 'delete from all and send to trash' option if the file is in multiple local file domains
- the advanced delete files dialog now fully supports file repositories
- cleaned up some logic on the 'remember action' option of the advanced file deletion dialog. it also supports remembering specific file domains, not just the clever commands like 'delete and leave no record'. also, this dialog no longer places the 'suggested' file service at the top of the radio button list--instead it selects that 'suggested' service if there is no applicable 'remember action' initial selection. the suggested file service is now also set by the underlying thumbnail grid or media canvas if it has a simple one-service location context
- the normal 'non-advanced' delete files dialog now supports files that are in multiple local file services. it will show a part of the advanced dialog to let you choose where to delete from
- .
- misc:
- thanks to user submissions, there is a bit more help docs work--for file search, and for some neat new 'mermaid' svg diagrams in siblings/parents, which are automatically generated from a markup and easy to edit
- with the new easy-to-edit mermaid diagrams, I updated the unhelpful and honestly cringe examples in the siblings and parents help to reflect real world PTR data, and brushed up all the text in the top sections
- just a small thing--the 'pages' menu and the page picker dialog now both say 'file search' to refer to a page that searches files. previously, 'search' or 'files' was used in different places
- completely rewrote the queue code behind the duplicate filter. an ancient bad idea is now replaced with something that will be easier to work with in future
- you can now go 'back' in the duplicate filter even when you have only done skips so far
- the 'index string' of duplicate filters, where it says 53/100, now also says the number of decisions made
- fixed some small edge case bugs in duplicate filter forward/backward move logic, and fixed the recent problem with going back after certain decisions
- updated the default nijie.info parser to grab video (issue #1113)
- added in a user fix to the deviant art parser
- added user-made Mega URL Classes. hydrus won't support Mega for a long while, but it can recognise and categorise these URLs now, presenting them in the media viewer if you want to open them externally
- fixed Exif image rotation for images that also have ICC Profiles. thanks to the user who provided great test images here (issue #1124)
- hitting F5 or otherwise saying 'refresh' explicitly will now turn a search page that is currently in 'searching paused' to 'searching immediately'. previously it silently did nothing
- the 'current file info' in the media window's top hover and the status bar of the main window now ignores deletion reason, and also file modified date if it is not substantially different from another timestamp already stated. this data can still be seen on the file's right-click menu's expanded info lines off the top entry. also, as a small cleanup, it now says 'modified' and 'archived' instead of 'file modified/archived', just to save some more space
- like the above 'show if interesting' check for modified date, that list of file info texts now includes the actual import time if it is different from other timestamps (for instance, if you migrate a file from one service to another some time after import)
- fixed a sort error notification in the edit parser dialog when you have two duplicate subsidiary parsers that both have vetoes
- fixed the new media viewer note display for PyQt5
- fixed a rare frame-duration-lookup problem when loading certain gifs into the media viewer
- .
- boring code cleanup:
- cleaned up search signalling UI code; a couple of minor bugs with 'searching immediately' sometimes not saving right should be fixed
- the 'repository updates' domain now has a different service type. it is now a 'local update file domain' rather than a 'local file domain', which is just an enum change but marks it as different to the regular media domains. some code is cleaned up as a result
- renamed the terms in some old media filtering code to make it more compatible with multiple local file services
- brushed up some delete code to handle multiple local file services better
- cleaned up more behind the scenes of the delete files dialog
- refactored ClientGUIApplicationCommand to the widgets module
- wrote a new ApplicationCommandProcessor Mixin class for all UI elements that process commands. it is now used across the program and will grow in responsibility in future to unify some things here
- the media viewer hover windows now send their application commands through Qt signals rather than the old pubsub system
- in a bunch of places across the program, renamed 'remote' to 'not local' in file status contexts--this tends to make more sense to people out of the gate
- misc little syntax cleanup

next week

Some small misc jobs and user-friendly-isation of multiple local file services.
>>8333 sounds great, with this I will be able to have:
Inbox
Seen to parse
Parse nsfw
Parse sfw
Archive nsfw
Archive sfw
If I'm able to search across everything, I get unfiltered results, but being able to refine down to specific groups outside of just a rating filter would be great.
>>8333 Does copying between local file services duplicate the file in the database?
Is it just me, or is there a bug preventing files from being deleted in v483? I can send them to trash, but trying to "physically delete" them doesn't work. Hitting delete with files selected does nothing, and neither does right-clicking and hitting "delete physically now".
(3.66 KB HydrusGraph.zip)

>>8317 Not an easy way, but attached is the original code that a user made to draw something very similar in matplotlib. If you adjust this, you could pipe it to another format, or look through the SQL to see how to extract what you want manually. My code is a bit too complicated and interconnected to easily extract. The main call is here-- https://github.com/hydrusnetwork/hydrus/blob/master/hydrus/client/db/ClientDB.py#L3098 --but there's a ton of advanced bullshit there that isn't easy to understand. If you have python experience, I'd recommend you run the program from source and then pipe the result of the help->show file history call to another location, here: https://github.com/hydrusnetwork/hydrus/blob/master/hydrus/client/gui/ClientGUI.py#L2305 I am also expecting to expand this system. It is all hacked atm, but as it gets some polish, I expect it could go on the Client API like Mr Bones recently did. Would you be ok pulling things from the Client API, like this?: https://hydrusnetwork.github.io/hydrus/developer_api.html#manage_database_mr_bones
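If you do end up pulling from the Client API, a minimal sketch of what that looks like, assuming the API service is enabled on the default port and you have made yourself an access key (the key below is a placeholder):
[code]
# minimal sketch: pull the 'mr bones' stats from a local client over the Client API.
# assumes the Client API is enabled on the default port 45869 and that the access
# key below is replaced with a real one from services->review services
import requests

API_URL = 'http://127.0.0.1:45869'
ACCESS_KEY = '0123456789abcdef...'  # placeholder

response = requests.get(
    f'{API_URL}/manage_database/mr_bones',
    headers={'Hydrus-Client-API-Access-Key': ACCESS_KEY},
)
response.raise_for_status()
print(response.json())  # inspect the stats, then pipe them wherever you like
[/code]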
>>8321 Is this feature to chase up links after SauceNao something from Hydrus Companion or similar? I don't work on that, so I'm afraid I can't help there, but I have been thinking of adding a feature on the hydrus side to say 'never download this'. A bit like a tag blacklist, but with URL Classes instead, so in your case you'd say 'never download from pixiv'. I was mostly thinking of it in terms of 'this domain is broken currently' tech, but I'd expose it to the user too. However, if you want to download from pixiv on other occasions, this might not be helpful.

>>8325 Thank you for this report! I think the scroll is ok as long as there is a scrollbar on the taglist that can move in that direction, but if the scrollbar is at the end, or there aren't enough tags to make a scrollbar, the scroll is being promoted up to the parent panel. I'll silence this. Let me know if you have any more trouble.

>>8327 I'm glad you like it! Let me know if you run into any trouble, and once you have figured things out, I'd be interested to know what you found most easy and difficult to learn. The help docs and general onboarding are always out of date, and feedback from new users on that front is always helpful.

>>8328 I haven't got to it yet, I'm afraid. There is a shortcut on the 'global' set that forces the scanbar to show, but this will always cover up the bottom part of the video. I have the same problem with short gifs, moving my mouse over only to see it was 1.1s long anyway. For some stupid layout code reasons, it is actually a pain atm for me to support both the current hide/show and the animation bar hanging beneath the video. I was thinking, as a compromise, how about an option that says 'instead of hiding the scanbar when the mouse isn't near it, just make it 3 pixels tall'? How does that sound? Then you'd always see it if you wanted, but it wouldn't take up much space. That would better solve the problem in the meantime and give me time to fix some hellish layout code here in the background.
>>8330 Awesome, thank you. I will update the help to reference this specifically.

>>8335 Yeah, I think my next step here is to make these sorts of operations easier. You can set up a 'search everything' right now by clicking 'multiple locations' in the file domain selector and then hitting every checkmark, but it should be simpler than that. ~Maybe~ even favourite domain unions, although that seems a bit over-engineered, so I'll only do it if people actually want it. Like I have 'all local files', which is all the files on your hard disk, I need one that is all your media domains in a nice union. Also want some shortcuts so people like you will be able to hit shift+n or whatever and send a file from inbox to your parse-nsfw domain super easy. As you get into this, please let me know what works well and badly for you. All the code seems generally good, just some stupid things like a logic problem when trying to open 'delete files' on trash, so now I just need to make the UI and workflow work well.

>>8340 No, it only needs one copy of the file in storage. But internally, in the database, it now has two file lists.

>>8356 Yes, sorry! Thank you for the report. This is just an accidental logic bug that is stopping some people from opening the dialog on trash--sorry for the trouble! I can reproduce it and will fix it. If you really want to delete from trash, the global 'clear trash' button on review services still works, and if you have the advanced file deletion dialog turned on, you can also short-circuit by hitting shift+delete to undelete and then deleting again and choosing 'permanently delete'.
First of all, thank you for all your hard work, HydrusDev. I have a small feature request now that we have multiple local services: for the Archive/Delete filter, there should be keyboard shortcuts for "Move/Copy to Service X" as well as "Move to Trash with reason X" and "Delete Permanently with reason X". The latter two would be nice because having to bring up the delete dialog every time is kind of clunky.
>>8361
>Is this feature to chase up links after SauceNao something on Hydrus Companion or similar?
Yes, it is from Hydrus Companion; I forgot that it was a separate program since I started using it at the same time that I started using Hydrus. Now that I think about it though, just avoiding Pixiv probably isn't the best solution either, since there's plenty of content that can only be found on Pixiv. If there is a way to download the English translations of the tags, then that would mostly solve the issue, since I could then use parent/sibling tagging to align them with the other tags. I don't know how doable that would be though, so for now the best solution is probably to import a sibling tag file that changes all the Japanese pixiv tags to their English tags, assuming that someone has already made this.
>>8330 I was able to get it working by copying libmpv.so.1 and libcdio.so.18 from my old installation (still available on my old drive) to the hydrus installation folder.
I entered the duplicate filter, and after a certain point it wouldn't let me make decisions any more. I'd press the "same quality duplicate" button and it just did nothing. I exited the filter, then the client popped up a bunch of "list index out of range" errors. Here's the traceback for one of them:

v483, linux, frozen
IndexError
list index out of range
File "hydrus/client/gui/ClientGUIShortcuts.py", line 1223, in eventFilter
  shortcut_processed = self._ProcessShortcut( shortcut )
File "hydrus/client/gui/ClientGUIShortcuts.py", line 1163, in _ProcessShortcut
  command_processed = self._parent.ProcessApplicationCommand( command )
File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3548, in ProcessApplicationCommand
  self._MediaAreTheSame()
File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3149, in _MediaAreTheSame
  self._ProcessPair( HC.DUPLICATE_SAME_QUALITY )
File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3259, in _ProcessPair
  self._ShowNextPair()
File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3454, in _ShowNextPair
  self._ShowNextPair() # there are no useful decisions left in the queue, so let's reset
File "hydrus/client/gui/canvas/ClientGUICanvas.py", line 3432, in _ShowNextPair
  while not pair_is_good( self._batch_of_pairs_to_process[ self._current_pair_index ] ):

I reentered the duplicate filter, and I got through a few more pairs before it stopped letting me continue again. It seems like it was on the same file as last time too. Could this bug have corrupted my file relationships?
>>8359
>Python script
That'll help a lot, thanks!
>Would you be ok pulling things from the Client API, like this?
Yeah, definitely.
>>8361 a 3 pixel tall scan bar... that honestly wouldn't be a bad option. My only concern would be its immediate visibility, and I'm not sure there is a good way to do that... would it be possible to have custom colors for it, both when it's small and when it's large? When it's large, that light grey with dark grey isn't a bad option, but when it's small it would kind of be a constantly moving needle in a haystack. But if, for instance, I had the background of the smaller bar be black with a marginally thick red strip, I would only see that red strip move. This may not be a great option for everyone, but I could see various different colors for higher contrast being a good thing, especially when it's 3 pixels big. Yeah, I think it's a great idea--it would make the video's length readily available from the preview, and it would be so out of the way that nothing is massively covered up. If it's an option, would the size be changeable/user-settable? It's currently 60 pixels if my counting is right, but I could see something maybe 15 or so being something I could leave permanently visible. If it can't be, it doesn't matter, but if it's possible to make it an option, I think this would be a fantastic middle ground till you give it a serious pass. Anyway, whatever you decide on will help no matter what path it is.
API "file_metadata" searches seem to be giving the wrong timestamp for the "time_modified" of some files, conflating it with time imported. The client itself displays the correct time modified, regardless of whether the file was imported straight from the disc or whether the metadata of a previously imported file had its source time updated after being downloaded again from a booru. Querying the API's search_files method by "modified time" does give the correct file order (presumably because the list of ID's from the client is correct), but the timestamp in the metadata is still equal to "import time". For some reason, this doesn't always happen, but unfortunately I haven't been able to determine why.
API "file_metadata" searches seem to be giving the wrong timestamp for the "time_modified" of some files, conflating it with time imported. The client itself displays the correct time modified, regardless of whether the file was imported straight from the disc or whether the metadata of a previously imported file had its source time updated after being downloaded again from a booru. Querying the API's search_files method by "modified time" does give the correct file order (presumably because the list of ID's from the client is correct), but the timestamp in the metadata is still equal to "import time". For some reason, this doesn't always happen, but unfortunately I haven't been able to determine why.
Sorry for the double post. Verification was acting up.
>>8367 This issue isn't just with the one pair now. It's happened with multiple pairs when trying to go through the filter. And it's not just happening when I mark them as same quality; it also happens when I mark them as alternates. I also noticed that when this bug happens, the number in the duplicate filter (the one that's like "13/56") jumps up a bunch.
I had an ok week. I fixed some bugs (including non-working trash delete, and an issue with the new duplicate filter queue auto-skipping badly), improved some quality of life, and integrated the new multi-service 'add/move file' commands into the shortcuts system and the media viewer. The release should be as normal tomorrow.

>>8367 >>8396 Thank you for this report, and sorry for the trouble! Should be fixed tomorrow, please let me know if you still have any problems with it.
Are sorting/collection improvements on the to-do list? I sometimes have to manually sort video duplicates out, and being able to collect by duration/resolution ratio and sort by duration and then by resolution ratio would be extremely helpful. Sorting pages by total filesize or by smallest/largest subpage could have some uses as well, but that might be too autistic for other users.
https://www.youtube.com/watch?v=OtPsKtUyGxg

windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v484/Hydrus.Network.484.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v484/Hydrus.Network.484.-.Windows.-.Installer.exe
macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v484/Hydrus.Network.484.-.macOS.-.App.dmg
linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v484/Hydrus.Network.484.-.Linux.-.Executable.tar.gz

I had an ok week. I fixed some things, improved some quality of life, and made internal file migration a bit easier.

highlights

Last week's debut of multiple local file services went well! As far as I know, no one who tried it out had any big problems, and my main concerns--mostly that it needs some better migration tools and workflows and 'this file is in here' UI--proved true. So, I know what I have to do and will keep working. Multiple local file services remains for advanced users for now, but I hope to launch it properly for everyone, with nice help, next week.

However, while doing this work, I did accidentally break the simple version of the 'delete files' dialog when files were in the trash--rather than say 'delete these permanently?', it just wouldn't appear. This was due to a logical oversight where it wasn't testing and counting up 'trash' status correctly. It is fixed now.

Also, there was a problem with the new duplicate filter queue for users who have done a good bit of processing. A certain function that in complicated situations automatically skips some pairs was failing whenever it hit the end of a batch. This is also fixed now--thank you for the great reports on this.

For multiple local file services, I updated the UI code, fixing some little bugs and improving the workflow when you have complicated situations, and I integrated the shortcuts system and the media viewer. You can now create 'add/move to service x' actions in the 'media' shortcut set, and the media viewer has the same add/move menu on right-clicks.

The media viewer has several other improvements: I think I fixed that annoying bug where a fullscreen borderless view of media that exactly fits the screen would sometimes not resize when you went back to normal window mode! Also, scrolling the mouse over the taglist hover window should no longer ever cause a 'previous/next media' event. And I have implemented a 'short and simple' version of the video/audio scanbar to show (instead of completely hiding it) when your mouse is away--just a few pixels to show things 'at a glance'. Even though it covers a few pixels of video at the bottom, I liked this so much that I set it as the default for all users. If you don't like it, you can hide it again with the new setting under options->media.

full list

- misc:
- fixed the simple delete files dialog for trashed files. due to a logical oversight, the simple version was not testing 'trashed' status and so didn't see anything to permanently delete and would immediately dump out. now it shows the option for trashed files again, and if the selection includes trash and non-trash, it shows multiple options
- fixed an error in the 'show next pair' logic of the new duplicate filter queue that hit when it needed to auto-skip through the end of the current batch and load up the next batch (issues #1139, #1143)
- a new setting on _options->media_ now lets you set the scanbar to be small and simple instead of hidden when the mouse is moved away. I liked this so much personally that it is now the default for all users. try it out!
- the media viewer's taglist hover window will now never send a mouse wheel event up to the media viewer canvas (so scrolling the tags won't accidentally do previous/next if you hit the end of the list scrollbar)
- I think I have fixed the bug where taking the media viewer from borderless fullscreen to a regular window would not trigger a media container resize if the media perfectly fitted the ratio of the fullscreen monitor!
- the system tray icon now has minimise/restore entries
- to reduce confusion, when a content parser vetoes, it now prepends the file import 'note' with 'veto: '
- the 'clear service info cache' job under _database->regenerate_ is renamed to 'service info numbers' and now has a service selector so you can, let's say, regen your miscounted 'number of files in trash' count without triggering a complete recount of every single mapping on the PTR the next time you open review services
- hydrus now recognises most (and maybe all) windows executables so it can discard them from imports confidently. a user discovered an interesting exe with embedded audio that ffmpeg was seeing as an mp3--this no longer occurs
- the 'edit string conversion step' dialog now saves a new default (which is used on 'add' events) every time you ok it. 'append extra text' is no longer the universal default!
- the 'edit tag rule' dialog in the parsing system now starts with the tag name field focused
- updated 'getting started/installing' help to talk more about mpv on Linux. the 'libgmodule' problem _seems_ to have a solid fix now, which is properly written out there. thanks to the users who figured all this out and provided feedback
- .
- multiple local file services:
- the media viewer menu now offers add/move actions just like the thumb grid
- added a new shortcut action that lets you specify add/move jobs. it is available in the media shortcut set and will work in the thumbnail grid and the media viewer
- add/move is now nicer in edge cases. files are filtered better to ensure only local media files end up in a job (e.g. if you were to try to move files out of the repository update domain using a shortcut), and 'add' commands from trashed files are naturally and silently converted to a pure undelete
- .
- boring code cleanup:
- refactored the UI side of multiple local file services add/move commands. various functions to select, filter, and question the user on actions are now pulled to a separate simple module where other parts of the UI can also access them, and there is now just one isolated pipeline for file service add/move content updates
- if a 'move' job is started without a source service and multiple services could apply, the main routine will now ask the user which to use, using a selector that shows how many files each choice will affect
- also rewrote the add/move menu population code, fixed a couple little issues, and refactored it to a module the media viewer canvas can use
- wrote a new menu builder that can place a list of items either as a single item (if the list is length 1) or make a submenu if there are more. it drives the new add/move commands and is now the behind the scenes of all other service-based menu population

next week

Next week is a cleanup week, so I will do some boring code cleanup and see if I can write some nice introductory help for the multiple local file services system. I have four more weeks before my vacation, so I am aiming to have the big work of multiple local file services finished by then.
>>8409 >>8377 Nice, the scan bar is far more visible than I thought it would be. I think other colors might help legibility further, but for me it's just fine as is.
ok h dev, probably my last question for a while. I have so far parsed through about 5000-10000 "must be pixel dups", and I have yet to find one where I decided 'let's keep the one with the larger file size'. I have decided, at least for the function of exact dupes, I'm willing to trust the program's judgement. Is there any automation in the program for these yet? From what I can see, a few of my subscriptions are generating a hell of a lot of these, and even then, I had another 50000 to go through. If there is a way to just keep the smaller file and yeet the larger, with the same settings I have assigned to 'this is better', that would be amazing. I don't recall if anything has been added to hydrus for this yet. I would never trust this for any speculative match, as I constantly get dups that require hand parsing with that, but holy shit is it mind numbing to go through pixel dups... scratch that, when I have all files, I have 325k "must be pixel dups" (2 million something potential dups, so this isn't a case of the program lagging behind options)
(34.93 KB 1920x1080 help.png)

Can't seem to do anything with these files. I can't delete them, and setting a job to remove missing or invalid files doesn't touch them. They don't have URLs so I can't easily redownload them either. What do?
>>8418 Note, they do have tags, sha256sums, and file IDs, but nothing else as far as I can tell. If I manage to redownload one by searching for it manually based off the tags, it appears and can be deleted. Maybe I could do some sqlite magic and remove the records via the file IDs using the command line, but I don't know how. The weird thing is how they appear in searches. They don't show up when I search only system:everything, but they do show up when searching for tags that the missing file is tagged with. I tried adding a dummy tag to all of my working files and searching with -dummy, and the missing files didn't show up. If I search some tag that matches a missing file and use -dummy, the missing files that are tagged with whatever other tag I used to search do show up. Luckily all of these files had a tag in common, so I can easily make a page with all of the missing files, 498 total. I can open the tag editor for these, and adding tags works, but I cannot search for tags that only exist on missing files (I tried adding a 'missing file' tag; I can't search it). Nothing interesting in the logs, unless I try to access one, which either gives KeyError 101 or a generic missing file popup. Hydev, if you're interested in a copy of my database folder, I could remove most of the large working files and upload a copy somewhere if you want to mess with it. I'm open to trying whatever you want me to if that's more convenient though.
Got this error after updating (definitely jumped multiple versions, not sure how many). Manually checking my files, it seems that all of them are fine. It's just that hydrus can't seem to make sense of them for some reason...? FYI my files are on a separate HDD and my hydrus installation is on an SSD. Neither is on the same drive as my OS.
>>8363 Thanks. I agree. I figured out the move/add internal application commands for 484, so they are ready to be integrated. 'Delete with reason x' will need a bit of extra work, but I can figure it out, and then I will have a think about how to integrate it into archive/delete and what the edit UI of this sort of thing looks like. Ideally, although I doubt I will have time, it would be really nice to have multiple archive/delete filters.

>>8364 Yeah, this sounds tricky. Although it is complex, I think your best bet might be to personally duplicate and then edit the redirection scripts or tag parsers involved here. You may be able to edit the hydrus pixiv parser to grab the english tags (I know we used to have this option as an alternate parser, but I guess it isn't available any more? maybe pixiv changed how this worked?), or change whatever is parsing SauceNao, although I guess that is part of Hydrus Companion. EDIT: Actually, if your only solid problem with pixiv is you don't want its japanese tags, hit up network->downloaders->manage default tag import options, scroll down to 'pixiv file page api' and 'pixiv manga_big page', and set specific defaults there that grab no tags. Any hydrus import page that has its tag import options set to 'use the current defaults' will then default to those, and not grab any tags.

>>8366 Thank you!

>>8376 Thanks. I'll make a job to expose this data on the Client API.
>>8377 >>8413 I'm glad. I am enjoying it too in my IRL use. I thought it would be super annoying, but after a bit of use, it just blends into my view and is almost unconsciously useful. Just FYI: the options are an ugly debug/prototype, but you can edit the scanbar colours now. Hit up install_dir/static/qss and duplicate 'default_hydrus.qss'. Then edit your duplicate so the 'qproperty' values under HydrusAnimationBar have different hex colour values. Load up the client, switch your style to your duplicated qss file, and the scanbar should change colour! If you already use a QSS style, then you'll want to copy the custom HydrusAnimationBar section to a duplicate of the QSS style file you use and edit that.

>>8379 Thank you, I will investigate this. I was actually going to try exposing all the modified timestamps on the Client API and the client, not just the aggregate value, so I will do this too, and that will help to figure out what is going on here.

>>8408 I would like to do this. It can sometimes be tricky, but that's ok--the main problem is I have a lot of really ugly UI code behind the scenes that I need to clean up before I can sanely extend these systems, and then when I extend them I will also have to update the UI to support more view types. It will come, but it will have to wait for several rounds of code cleaning all across the program before I dive properly back in here. Please keep reminding me. Sorting pages themselves should be easier. You can already do a-z name and num_files, so adding total_filesize should be ok to do. I'll make a job.

>>8417 Thanks. There is no automation yet, but this will be the first (optional) automated module I add to the duplicate filter, and I strongly expect to have it done this year. I will make sure it is configurable so you can choose to always get rid of the larger. Ideally, this will process duplicates immediately upon detection, so the client will negotiate it and actually delete the 'worse' file as soon as file imports happen.
>>8418 >>8428 Thanks, this is odd, but it may be completely explainable. Can you check the 'file domain' (left button) of the tag autocomplete dropdown of those search pages? Does it say 'my files' or 'all known files'? Given you can re-download these, it sounds like these are previously deleted files. If you right-click on one and hit the top item so it expands out to all the info lines and timestamps, does it say something like 'deleted from my files 3 months ago' or similar?

I'm actually going to write about this a bit more this week as I do the multiple local file services help, but hydrus doesn't technically care if a file is in a domain or not--as long as the client has once heard of its hash, it can add tags or ratings or urls to it. This is the core of how the PTR works. If a file is in the client, then it can draw a thumbnail; otherwise, it draws that default hydrus icon and a red border. Normally, you never see these 'non-local' files, since when you search, you are limited to the 'my files' domain, so you filter hydrus's knowledge only to the files you have on disk, but if your file domain on that search page is 'all known files' or another advanced search, then they may have been exposed. If you see these on 'my files' or 'trash' or 'all local files', then something is definitely going wrong.

>>8430 I am very sorry, this error means it is extremely likely that you have had some hard drive damage and your database files (on your SSD) have been damaged. Sometimes these errors are severe (hard drive dying), but often they are trivial (just a bit of extra junk data after a rough powercut). It may be the update routine walked over a damaged area and set a flag. Your next step is to check "install_dir/db/help my db is broke.txt". This document will talk all about it and your next steps to ensure your data is safe and start recovery. Normally this error would point you to that file, but it seems to have happened at an inconvenient moment for you and the error handling isn't clever enough to figure it out. Let me know if you need any help, I'm happy to go one on one to help fix or recover from anything serious.
>>8446 Missing files anon here, it said "my files". I should have mentioned this in my first post, but I had to restore my database from a backup a while back and these first appeared then. I'm assuming they were in the database when I backed it up, but had been deleted in between making the backup and restoring it. I fucked around with file maintenance jobs and managed to fix it. It didn't work the first time because "all media files" and/or "system:everything" wasn't matching the missing files. The files did all have a tag in common that I didn't care to remove from my working files, and for some reason this tag would match the missing files when searched for. I ran the maintenance search on that tag and did the job, and now they're gone.
>>8446 >>8447 Actually, scratch that. The job was able to match the files and reported them as missing, put their sha256sums into a file in the database folder, and made them vanish from the page that had the tag searched, but refreshing it shows that they weren't actually removed and I still encounter them when searching for other tags. Not sure what to do now.
Hello. Is there a way to make sure that, when scraping tags, the images that were deleted aren't going to be downloaded again?
Can someone help me? Since the last 3 releases, Hydrus has been pretty much unusable for me. After having it open for a while, it ends up "(not responding)", and it can stay that way for hours or until I force close it. I asked on the discord but no one has replied to me (I can't complain though, they have helped me a lot in the past). I have a pretty decent PC: R7 1700, 32GB of RAM, and I have the main files on an NVMe drive and the rest on a 4TB HDD. Please help, I haven't been able to use Hydrus for almost a month.
Trying to download my Pixiv bookmarks, but every time I enter the url "https://www.pixiv.net/en/users/numbers/bookmarks/artworks" I get an error saying "The parser found nothing in the document". I'm only trying to grab public bookmarks, and I've got Hydrus Companion set up with the API key. Not sure what I'm doing wrong, unless there's some alternate URL I'm supposed to use for bookmarks.
could you change the behavior of importing siblings from a text file so that, if a pair would create a loop with siblings you already have, it just asks if you want to replace the existing pairs that would be part of the loop with the ones from the file? The way it works now, there's no way to replace those siblings with the ones from the file except for manually going through each one yourself, but that defeats the purpose of importing from a file. This would be an exception to the case of you clicking "only add pairs, don't remove", but that's okay because the dialog window would ask you first. As it is right now, the feature is unfortunately useless for my purposes, which is a shame because I thought I'd finally found a solution for an issue with siblings I've been having for a while. A real bummer.
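For anyone unsure what 'create a loop' means here: siblings effectively act as a tag -> preferred-tag mapping, and a new pair creates a loop if walking the chain from the new pair's 'better' side leads back to its 'worse' side. A toy illustration of the concept, nothing to do with hydrus's actual implementation:
[code]
# toy illustration of the concept, not hydrus's actual code: siblings act like a
# tag -> preferred-tag mapping, and a new pair (worse, better) would create a loop
# if walking the chain from 'better' leads back to 'worse'.
# assumes the existing mapping is already loop-free
def would_create_loop(siblings, worse, better):
    current = better
    while current in siblings:
        current = siblings[current]
        if current == worse:
            return True
    return False

siblings = {'kitty': 'cat', 'cat': 'animal'}
print(would_create_loop(siblings, 'animal', 'kitty'))  # True: kitty -> cat -> animal -> kitty
print(would_create_loop(siblings, 'dog', 'animal'))    # False
[/code]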
I had a good simple week. I cleaned some code, improved some quality of life, and made multiple local file services ready for all users. The release should be as normal tomorrow.
https://www.youtube.com/watch?v=AKgjOCuW_MU

windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v485a/Hydrus.Network.485a.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v485a/Hydrus.Network.485a.-.Windows.-.Installer.exe
macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v485a/Hydrus.Network.485a.-.macOS.-.App.dmg
linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v485a/Hydrus.Network.485a.-.Linux.-.Executable.tar.gz

I had a good week. The multiple local file services system is now ready for all users.

multiple local file services

I have written some proper help for this new system to talk about what it is and how to use it. The basic idea is you can now have more than one 'my files', which lets you compartmentalise things for privacy or workflow reasons. The help is here: https://hydrusnetwork.github.io/hydrus/advanced_multiple_local_file_services.html

All users can try this out--you no longer have to be in advanced mode--but in terms of experience level, I recommend it to people who are at least comfortable with tag siblings and parents.

This system is fundamentally feature complete. The outstanding immediate problems are that file location doesn't show up in the UI very well yet, the Client API should plug into it better, and it needs some en masse controls to do large file migrations and client merging. I hope to work on these in the coming weeks. If you give this a go, let me know what you think!

full list

- multiple local file services:
- multiple local file services are now available for everyone! you no longer need to be in advanced mode to create them. all are welcome, but in terms of skill level, I most recommend it for users who are comfortable with tag siblings and parents
- the tl;dr: you can now have more than one 'my files', which lets you put things in isolated locations
- I wrote a proper help document on multiple local file services--what they are, how they work, my recommendations, and a bit of extra info about hydrus file search in general--right here: https://hydrusnetwork.github.io/hydrus/advanced_multiple_local_file_services.html
- file searches in 'multiple locations' on large clients are now massively faster in almost all situations. the only place multiple location searches are still slow is whenever the duplicates system (system:file relationships) comes into play
- .
- misc:
- in the page tab menu, you can now sort pages by total file size
- the 'force system:limit for all searches' option is moved from the 'speed and memory' panel to the 'search' panel
- when files download from sites, if the raw file is served by cloudflare and has a timestamp radically different to a parsed source time, that CF timestamp is saved under a different domain rather than overwriting the original domain timestamp. this seemed to affect danbooru on about 1 in 10-20 files. note this does not change much at the moment, but when you can see and sort on individual domain modified dates, this should improve the sort
- updated the 'installing' help to talk about bad install locations for the database. network locations are bad, and thanks to user reports, we now know USB drives can be bad if the database is busy when the OS goes to sleep
- if a 'database is malformed' error occurs on boot, the client now recognises it and points the user to 'install_dir/db/help my db is broke.txt' for the next steps
- .
- boring code cleanup:
- another 60KB or so of code pulled out of ClientDB.py:
- created a new database module for url mappings and refactored various fetch and update routines to it
- created a new database module for some rich file metadata and refactored some file filtering, history, and status testing code to it
- created a new database module for file searching and moved all tag-based file searching code to it
- moved several other misc methods down to database modules

next week

I am behind on my github bug reports and lots of other small work, so I will chip away at these. Thanks everyone!
I'm pretty new to using this, but is there a way to tag a file with a 'gang of niggers' tag without including its parent tags?
I'm looking to use an android app (or equivalent) that lets me manage (archive/delete) my collection hosted on a computer within a local network, so that, say, if I had no internet, I could still use it. Is this a thing? Is there a program that will do this? The available apps out there are a bit confusing as to what their limitations or features are.
Is it possible to download pics from Yandex Images with Hydrus, or can someone suggest a good program that can? Thanks.
is there a setting to make it so hydrus adds filenames as tags by default, such as when importing local files?
>>8453 Isn't that the default behavior of downloaders? Make sure "exclude previously deleted files" is checked. Or are you trying to add tags to files you've already deleted without redownloading them? I don't know if you can do that. >>8468 If you want to give something a tag without including its parent tags, it sounds like that tag shouldn't have those parent tags in the first place. >>8487 Import folders can do that. You can just have a folder somewhere that you can dump files in, and you can set hydrus to periodically check it and do things like add the filename or directory as tags.
Is there a way to download tags and other things from a parser even if the parser can't find a file to download? There are a bunch of images on e621 that I downloaded a long time ago but I didn't download the tags. Since then the artist has had almost all their images taken off of e621. Even though the images have been down, the tags are still there. Example: https://e621.net/posts/1292060 The images have the e621 url in their known urls, but if I try to download the url with hydrus it just says that it can't find anything to download. Even if "force file downloading even if url recognised" is unchecked, it won't add the tags to the file already in the db. Maybe this could be a file import option. Call it "if post url recognised, ignore failure to find file" or something.
>>8446 The cloning process seems to have worked in the sense that the integrity checks now pass. However now I get this message when I boot up hydrus. Is it safe to proceed or am I in deeper shit?
>>8447 >>8452 Thank you, this is odd. It feels like your different file services have somehow become desynced, so 'my files' has a different file list to 'all local files'. Like with 'all media files' not grabbing the orphan file records. If you make sure help->advanced mode is on, and then change the file domain from 'my files' to 'all local files', do the ghost files still show up? If not, that suggests yes, there is a desync here. There is a special command for this, but it is old and I don't know how well it works in the new multiple local file service era. Please make a backup before you try this, in case it goes wrong. Then give database->db maintenance->clear orphan file records a go. It should give you some info.

>>8453 >>8488 Yeah, this is default. The option is under the file import options button of any downloader. Defaults for these options are under options->importing.

>>8454 When you run the program, can you check your install_dir/db folder for me? Do the different temporary .db-wal files grow very large, like 800MB+? I am chasing down a bug related to this that sounds a bit like your problem. Otherwise, please bear with the lag for a bit and hit up help->debug->profiling. There is a 'what is this?' menu entry there that explains how it works. pastebin or email me the profile log and I will see what is running so slow for you. Quick things to try:
1) if you have hundreds of pages or hundreds of download queries, reduce the size of your session
2) pause tag sibling/parent background sync maintenance under tags->sibling/parent sync.
>>8455 I am not a pixiv user IRL so I can't talk too intelligently, but hydrus is only set up to parse certain URLs. Typically that is stuff like an artist's gallery homepage, like this: https://www.pixiv.net/en/users/67138065 That URL you posted, is that your favourites on Pixiv? Hydrus would have to be taught how to parse your favourites, which I don't think it does by default. The community repository has some downloaders here that look good: https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Downloaders/Pixiv So, if you download that newer bookmarks png: https://raw.githubusercontent.com/CuddleBear92/Hydrus-Presets-and-Scripts/master/Downloaders/Pixiv/pixiv%20bookmarks%20-%202020-11-23.png And import it via network->downloaders->import downloaders (drag and drop it on Lain), maybe it will work? Sorry if I can't help more.

>>8460 Sure, thank you. I'll figure out some yes/no dialogs to change the import behaviour to a sort of 'overwrite'.

>>8468 >>8488 Yeah, parents are not optional. They are supposed to apply to definitional relationships, like a 'car' is always a 'vehicle'. If you really hate the parents that, say, the PTR gives, you can change what applies where under tags->manage where tag siblings and parents apply.
>>8475 They are mostly under development right now. Some are better than others. Actual 'management' is limited--mostly they do read-only search atm, but the tools will expand in future. I assume you have been here to see the list, but if not: https://hydrusnetwork.github.io/hydrus/client_api.html#browsers_and_tools_created_by_hydrus_users Hydrus Web is your best bet if you are looking for a booru-style interface. Normally you use a site to load the interface, but if you want a local network solution, you can spin up a Docker instance, if you have that support. An alternative--this sounds stupid, but I know a few guys who do it to great effect--is to just run a VNC app through your tablet, maybe with a hotkey overlay set up for your hydrus shortcuts, and then just tap to go through your archive/delete filter on the couch. Since you are on a local network, you have all the bandwidth you need for smooth VNC.

>>8476 Not by default, and I'm afraid I don't see a user-made downloader at the community repository here: https://github.com/CuddleBear92/Hydrus-Presets-and-Scripts/tree/master/Downloaders I am mostly confident that hydrus could be taught to download from yandex images, but actually learning how to do that takes some time. You might like to play with hydrus's 'simple downloader'; maybe one of the default formulae in there can grab image links or something and get what you want. Or, if you are very nice to a hydrus user who knows how to make downloaders, you might be able to get them to make one for you.

>>8489 There might be a way to bodge this, like if you grabbed the hash in the parser and hydrus never noticed it was missing a direct file URL, but I think there are too many weird hurdles to overcome and it would just fail somewhere. It is a long term project of the program to have efficient hash->tag lookup maintenance, so I do plan to have official support for this in a future iteration of the whole parsing and lookup and maintenance systems. For now, your best bet is the Client API. Grab whatever kind of hash and tag info you like in your own script, and then throw it at the Client API. https://hydrusnetwork.github.io/hydrus/client_api.html

>>8490 This looks good to me! The clone has removed all the damaged data, which seems to include some tag count tables. This is good news, because all this data can be regenerated, and it even seems that I wrote some special repair code to fix it automatically. With luck, the worst damage here is the annoyance of waiting for things to fix themselves. Click ok, let it do its work, and have a browse around. There may be more warning popup windows like this. Other data may be missing (e.g. not a whole missing table, which is easy to spot, but a table now missing half its contents from the clone), but if it is all limited to client.caches.db, you are in luck, because all that can be regenerated. Let me know if you notice any whack counts or bad searches once you are working again, and I can help you figure out which of the guys under database->regenerate you should run. (NOTE: do not run any of those unless you know you need them; some of them take ages.)
>>8491 It seems I already had "all local files" on, but changing it back to just "my files" seems to have no effect. I tried "clear orphan file records" and it nearly instantly completed without finding any.
>>8493
>For now, your best bet is the Client API
Managed to figure it out, thanks. I used gallery-dl to download the metadata for all the files, gathered the md5 and tags from the metadata, searched up the md5 in the API and got the sha256, then added the tags to the sha256.
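The tag-pushing half of that looks roughly like this--the md5-to-sha256 lookup is elided, 'my tags' is the default local tag service name, and exact parameter names may differ on other client versions:
[code]
# rough sketch of the tag-pushing half of the pipeline: given a sha256 you have
# already resolved (e.g. from an md5 lookup) and the tags parsed from gallery-dl
# metadata, push them to a local tag service via the Client API.
import requests

API_URL = 'http://127.0.0.1:45869'
HEADERS = {'Hydrus-Client-API-Access-Key': 'your access key here'}  # placeholder

def add_tags(sha256_hex, tags):
    body = {'hash': sha256_hex, 'service_names_to_tags': {'my tags': tags}}
    r = requests.post(f'{API_URL}/add_tags/add_tags', headers=HEADERS, json=body)
    r.raise_for_status()
[/code]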
Hi, I didn't use Hydrus (Linux version) for three months, and after updating to the latest version I noticed the following: when you start a selection in the file manager (e.g. press shift and repeatedly press → to select multiple files), the image preview freezes at the start of the selection, but the tag list reflects your movements. The old behavior was that both the preview and the tag list changed synchronously.
>>8475 >>8493 Okay, thanks for the response. When the development is finished, I assume there will be an announcement. I had considered the VNC option. I'm not sure who's developing the app, if it's you or someone else, but do you know if it will be like a remote control of hydrus on a host computer, a kind of port of existing hydrus, or if it'll have the functionality of both options? I'm also curious about an approximate timeframe.
>>8455 I got it to work via a URL like this through Hydrus's url import page: https://www.pixiv.net/ajax/user/YOURPIXIVID/illusts/bookmarks?lang=en&limit=48&offset=96&rest=show&tag= I didn't try to change the limit key (I was afraid of a ban), so the whole process was page by page, increasing the offset by 48 for every URL entered.
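If anyone wants to repeat this, a throwaway sketch to print the URLs page by page for pasting into an url import page (the user id and page count are placeholders):
[code]
# throwaway helper: print the bookmark api urls page by page.
# USER_ID and the page count are placeholders
USER_ID = 'YOURPIXIVID'
PAGE_SIZE = 48  # left at the site default, as above

for page in range(10):  # however many pages of bookmarks you have
    offset = page * PAGE_SIZE
    print(
        f'https://www.pixiv.net/ajax/user/{USER_ID}/illusts/bookmarks'
        f'?lang=en&limit={PAGE_SIZE}&offset={offset}&rest=show&tag='
    )
[/code]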
>>8505 update: Hydrus finally booted, thank god, however it's completely empty. All the files are still on my HDD, I can check; hydrus just seems to have forgotten about them. I suspect it has also forgotten pretty much all other settings as well, such as my thumbnail and file drive locations (thumbnails on ssd, files on hdd, originally, as suggested).
>>8515 Would I be able to do a "restore from a database backup", select my old, now seemingly "unlinked"/"forgotten" db, and proceed?
The release will only be recommended for advanced users! Regular users please check back next week. I had a great week. I fixed several things, improved some quality of life, and added a new service to the database to make managing multiple local file services a bit easier. The release should be as normal tomorrow.

>>8505 >>8515 >>8516 Damn, this is not good. Your options structure has, yeah, been damaged, which means that client.db was affected too. Did you get lots and lots more 'this module was missing some tables' warnings? If your client sees no files, then it sounds like your core file table was damaged as well. This sounds stupid, but please check file->open->database location to make sure the client is pointing at the right location. In the off-chance that somehow your db folder has been set to read-only due to drive damage, it might redirect to a different location and would appear to be a brand new database. EDIT: There is an odd thing here I can't explain--your options structure was destroyed, and presumably the database made a fresh one. If this is true, it should not have a database backup location stored. If you made a backup previously, I think hitting 'restore from a database backup' is the correct answer here. Since everything is very damaged, I would not do this in the client, but externally, and make sure you keep everything. Something like this:
- Go to install_dir/db
- Move the damaged client*.db files somewhere safe.
- Go to your db_backup folder (this used to be something like install_dir/db/db_backup, but it could be somewhere else. Search your system for "client.master.db" if you aren't sure where)
- Copy the client*.db files from the backup folder to your install db folder.
- Try to boot
Make sure you don't delete anything, and make sure your temporary folders are labelled so you don't lose track of anything. I am not sure what has happened. You seem to have had some really bad database damage, and this may ultimately need some more focused back and forth. Let me know how you get on, and if you like, please email me or DM me on discord and we can get into it more closely. You've been reading 'help my db is broke.txt', but I'll just reiterate--please make sure your SSD is healthy, in case there are ongoing read errors here or something.
>>8518 Alright, lemme give just a little more context to the current state of things then. This is how my setup *used* to be set up:
client.exe in (SSD) E:\Hydrus Network\
thumbnails in (SSD) E:\Hydrus Network\thumbnails\
files in (HDD) F:\Hydrus Network Files\files\ (from f00 to fff)
After this whole fuckery happened, I manually checked, and all files remain in their place and continue to be fully intact and viewable from the file explorer, able to be opened and viewed without a fuss. Coming home from work, I checked, and it seems my suspicions were right: all my settings were reset to default, including the default file locations, so for example, were I to save a picture from 8chan, it would by default put it in E:\Hydrus Network\db\client_files\. There are currently no files actually saved in this location; it's empty. To clarify, I didn't "create a backup" before this, but since my previous files in (F:) still remain there completely fine and viewable, I was wondering if I could simply instruct hydrus to "look here for pictures", basically. At this point I don't care about tags, watches, and all that stuff; I'm just glad my files are safe, and I want to get hydrus back in a shape where it's usable for me.
>>8520 P.S.: It's as if hydrus had uninstalled and then reinstalled itself. Quite bizarre...
>>8520 >>8521 Yeah, this is very odd. If you had not posted about the malformed errors and the problem loading the serialised options object, I would have guessed that your database files had been accidentally deleted. If the client boots with no 'client.db' file in the db directory, it assumes this is first start and creates a fresh one. That would give the symptoms of resetting your file locations back to install_dir/db/client_files. I am sorry to say I think your client.db probably was eviscerated in some way, almost certainly a very bad hard drive event, or something external--like a crazy anti-virus program, or it might be a cloud backup process--removed or broke the file. In any case, I am sad to say I think your best bet is to move everything in your 'db' folder to a safe location and start again. The current database is either damaged or strange and can't really be trusted going forward. Make a new database and import the files in F:\Hydrus Network Files\files\ in batches. You can't go 'just look here and get the files', unfortunately, but you can import them manually no problem. If there are things like inbox status you want to try to save from the old database, I can help with that, but it will require some time and complicated manual SQL to do. Let me know what you miss. This situation sucks, but if your files are safe, that's great. Once you are feeling better about your situation, please check out how to maintain a backup of your client: https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#backing_up
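If dragging thousands of files into the client sounds like a pain, the Client API can do the batching for you--a rough sketch, assuming the API is enabled with 'import files' permission, using the paths from your post:
[code]
# rough sketch: walk the old file store and ask the client to import each file
# via the Client API. assumes the API is enabled with 'import files' permission;
# the path is from the post above, and the key is a placeholder
import os
import requests

API_URL = 'http://127.0.0.1:45869'
HEADERS = {'Hydrus-Client-API-Access-Key': 'your access key here'}  # placeholder

for root, dirs, files in os.walk(r'F:\Hydrus Network Files\files'):
    for name in files:
        path = os.path.join(root, name)
        r = requests.post(f'{API_URL}/add_files/add_file', headers=HEADERS,
                          json={'path': path})
        r.raise_for_status()
[/code]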
https://www.youtube.com/watch?v=ZUrcYKghr-Y windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v486/Hydrus.Network.486.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v486/Hydrus.Network.486.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v486/Hydrus.Network.486.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v486/Hydrus.Network.486.-.Linux.-.Executable.tar.gz
I had a great week working on a variety of smaller issues and some important database updates. The release this week is only recommended for advanced users. I make an important change, and I want to make sure the update works quickly and without problems before I roll it out to everyone. If you are not an advanced user, please check back in next week! The update will also take a few minutes this week.
all my files
So, I have made a new virtual service, 'all my files', which covers the union of all your local file services. This service is very similar to 'all local files', but it does not include trash or repository files. It provides a bunch of tools across the program for quick and precise searching of all the files that have value and are worth looking at. When you update, this new service will be created and populated. It will take a few minutes, longer if you have millions of files and tags. My 2.8-million-file ptr-syncing client took 32 minutes. There are progress updates on the splash window. Once you are booted, you will see 'all my files' in review services and in the file domain selector if you have more than one local file domain. Feel free to play around with it--it will run a lot faster than previously going 'multiple locations' and unioning all your local file services. The code is working really well on my end, and I am not afraid of anything being damaged, but if something goes wrong, it may require some clever/slow regeneration to fix. The main things I would like to know are: 1) Did your update run significantly slower than ~100k files/minute? Did it get held up on anything? 2) After some use, have you noticed any file/tag miscounting with 'all my files'? As always, make a backup before you update.
other highlights
The 'media viewers' shortcut set has three new zoom actions: 'switch between 100% and max', 'switch between canvas and max', and 'zoom to max'. When you enter pairs in the tag sibling dialog, it shouldn't complain about loops anymore, but instead automatically break them, just like how it will auto-petition an A->B, A->C conflict. The database now cleans up after itself more thoroughly. Some users have been having trouble with very large 'WAL' files, some getting to be multiple GB, and perhaps seeing bloated memory use along with it. A set of new maintenance routines now forces write-flushing at regular intervals. In my testing, there is no lag related to this, but I will be interested to hear if anyone gets new commit hang-ups during very heavy work. If you have had a huge WAL, let me know if this helps!
full list
- This week's release is for advanced users only! I make a big change, and I want to make sure the update is fast and there are no unusual problems before rolling it out to all users.
- all my files:
- the client adds a new virtual file service this week, 'all my files', which is an umbrella covering all your local file domains.
if you do not engage the multiple local file services system, you won't see it much, but if you do, you'll now have a convenient tool for saying 'all my stuff' without including trash and repository updates
- it will take a minute or two to generate this new service on update. if you have a client with millions of files, it may take a while
- 'all my files' now appears in the file domain selector button on your tag entry box if you have more than one local file domain. selecting this searches the union of all your local file domains with fast and precise count (as opposed to 'multiple locations' of the full union, which will have imprecise counts and be slower). it also does duplicate file work laser-fast (again, unlike 'multiple locations', which is often slow due to UNION complexity)
- 'all my files' also appears in review and manage services, very similarly to 'all local files'
- a heap of hacks I instituted when getting multiple local file services ready are now replaced with this clean 'yeah this file is valued and worth looking at' domain. for instance, downloader pages now view files in this way.
- mr bones and the file history chart also use 'all my files', and are significantly faster to calculate. the chart also excludes repo update files and trash now
- calls to delete or undelete on 'all my files' (this is mostly Client API and some 'default' situations) will be converted to a blanket 'force send to trash' and 'force undelete all deleted records'
- the 'undelete files?' dialog is now a button selection dialog. it also now has an 'all the above' option when more than one local service may apply, which tells the client to undelete to all services the files have been deleted from
- updated multiple local file services help to talk a little about the new domain
- rearranged the sort in a couple of places where the different local file services appear. they should now be: local file domains, all my files, trash, repo updates, all local files
- ADVANCED: the 'presentation import options' under 'file import options' now allows a full-fledged location context using the new multiple local file services system rather than the previous 'in your files (and trash too)' choice. it defaults to the new 'all my files' domain
- misc:
- thanks to a user, the 'getting started with downloading' help has had a full pass. if you have had trouble with downloaders, particularly if you are unsure about what file import options are for, or what subscriptions are, please check it out!
- the 'media viewers' shortcut set gets three new zoom actions: 'switch between 100% and max', 'switch between canvas and max', and 'zoom to max' (issue #1141)
- if a media type is set to do 'exact zooms', it will now not exceed the otherwise specified max zoom
- the file sort widget will now preserve ascending/descending status on sort type changes (rather than resetting to default) if the asc/desc strings do not change. so, if you are on 'import time'/'oldest first', and switch to 'archive time', it will now stay on 'oldest' rather than resetting to 'newest'
- the manage tag siblings dialog now tries to automatically break loops for you, just like it will automatically break A->B, A->C conflicts. this works on manual entry or mass import
- the manage tag siblings dialog now shows the stated 'reason' for any pair change (e.g. "AUTO-PETITION TO BREAK LOOP") in the 'note' column
- the 'short' animation scanbar--when your mouse is away--now keeps a short disabled volume button beside it. I found it very annoying how the scan nub would jump a few pixels left/right as this popped up and down, so now it is the same width big and small
- right-clicking on files when in pages with 'multiple locations' file domains is now much much faster
- the filename tagging dialog now starts with the 'tags for all' focused, and the 'press up/down on empty input' shortcuts are now plugged in, so pressing up/down will change service
- I believe I may have completely eliminated the additional superlag that sometimes occurs when adding or deleting a service. it was a database maintenance routine getting carried away with other outstanding work
- move/add actions in the new multiple local file system now operate asynchronously and politely, spreading their work time out when the client is busy, and for large jobs they will also make a cancellable progress popup
- cleaned up how the autocomplete entry sends some of its signals to other parts of the program
- did some misc help and code edits/refactoring, including brushing up the Windows install section with more advanced options
- removed the 'hydrus zooms big bad' warning from the 'media' options page. hydrus zooms big good now!
- .
- some database stuff:
- tl;dr: database cleans up after itself better now
- some users have had trouble with database journal files (the 'wal' files in your db directory) on certain clients getting huge after lots of work, multiple GB, and causing the OS a headache if the journal is doing work through a computer sleep. these journals are 'supposed' to checkpoint and clean themselves up naturally, but I think a busy database chokes them. therefore, I have improved the hydrus maintenance this week: 1) the 'journal size limit' PRAGMA, which applies softly after every 30 seconds or so, is now 128MB, down from 1GB. 2) databases in PERSIST (rare) mode will now specifically zero out their journal every fifteen minutes. 3) databases in WAL mode (the default), in addition to regular PASSIVE checkpointing now every five minutes, will force an additional TRUNCATE checkpoint every fifteen. this should force a regular full flush and maybe help some other problems, like the gigantic memory bloat the same users sometimes saw. (a minimal sketch of the pragmas involved follows at the end of this post)
if you are a very advanced user and do active debug on the database while hydrus is using it, please note this new TRUNCATE command is aggressive and may block itself or you inconveniently. let me know how you get on!
- moved the recent 'be careful of usb drives' section in the 'installing' help to 'help my db is broke.txt'. it is very likely this problem was related to the above WAL stuff, and it was not just usb drives, so I rewrote it as generalised help for anyone who gets 'delayed write failed' errors at the OS level
- massively optimised several critical duplicate files filtering methods if the current location context has more than one file domain, and I think I cleared out the basic 'get duplicate info for this file' call of all slow calls in complex location contexts
- the repair routine that regenerates mapping caches if any tables are missing on boot is now more reliable and covers the entirety of the mappings cache system using the new modules system. it also now regenerates just for the tag services with missing tables, not the whole cache
- if multiple types of mapping cache tables are missing on boot, and multiple waves of regenerations covering different areas are planned, duplicate regenerations will now be skipped
next week
Beyond some more multiple local file services work--probably client api updates--next week is a 'medium size' job week. I want to plough some time into better en masse import/export tools for tags and other metadata. I'm not sure how far I will get, but I want a framework sketched out so I can start hanging things off it in future.
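For the curious, the journal maintenance described above maps onto standard SQLite pragmas. This is not hydrus's actual code, just a minimal sketch of the calls involved:
[code]
# the three SQLite pragmas behind the new journal cleanup
import sqlite3

db = sqlite3.connect( 'client.db' )

# a soft cap on journal file size, applied after transactions complete (128MB)
db.execute( 'PRAGMA journal_size_limit = 134217728;' )

# the polite checkpoint: moves WAL pages into the db without blocking anyone
db.execute( 'PRAGMA wal_checkpoint(PASSIVE);' )

# the aggressive checkpoint: flushes everything and truncates the WAL to zero
db.execute( 'PRAGMA wal_checkpoint(TRUNCATE);' )

db.close()
[/code]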
Can Hydrus get support for WavPack (.wv) audio files, even just for storing, not playback? It would be a good addition to the already supported .flac and .tta.
Down the line this will probably be obsolete, but before then it will help quite a bit. With duplicates, when they are pixel matches, is there a way to set the lower file size one to be green and the bigger one to be red? It's already this way with jpeg vs png pairs, but same-vs-same just has both as blue, and with pixel duplicates there would never be a reason to choose the larger file size. I want the duplicate deciding process to be as speedy as possible, at least with these exact duplicate ones. I have been watching the numbers while doing this, however--and this may be my monitor--unless I'm staring straight at them, they kind of blend, making 56890 all kind of look alike, requiring me to sit up and look at the screen straight on. I think if the lower number was green on exact dupes it would speed the process up significantly, at least until an auto discard for exact dupes (which hopefully takes the smaller file size as the better of the pair) gets implemented and we no longer have to deal with exacts. I don't know if this would be simple to implement, but if it is, it would be much appreciated.
I'm trying to download a thread from archived.moe and archiveofsins.com, but it keeps giving errors with a watcher and keeps failing with a simple downloader. It seems like manually clicking on the page somehow redirects to a different link than when hydrus does it.
>>8158
>In terms of metadata, hydrus keeps all other metadata it knows about the file.
If there is no URL data (e.g. I imported it from my hard drive), and I remove the tags from the files before deletion, and then use the option to make Hydrus forget previously deleted files, would I "mostly" be OK? Also, what does telling Hydrus to forget previously deleted files actually remove, if it still keeps the files' hashes? I don't feel comfortable (or desperate) enough to use the method you gave, but I also don't want to go through the trouble of exporting all my files, deleting the database, reinstalling Hydrus, and then importing and tagging the files all over again.
My autocompleted tag list displays proper tag counts, but when I search them I get dramatically fewer images. I can still find these images in the database through system:* searches, and they're still properly tagged. My tag siblings and parents aren't working for some tags either. But all the database integrity checks say everything is okay. What's my next step?
Still getting some errors in the duplicate filter. I think it has something to do with when I'm choosing to delete images.
v485, win32, frozen
IndexError
list index out of range
File "hydrus\client\gui\ClientGUIShortcuts.py", line 1223, in eventFilter
shortcut_processed = self._ProcessShortcut( shortcut )
File "hydrus\client\gui\ClientGUIShortcuts.py", line 1163, in _ProcessShortcut
command_processed = self._parent.ProcessApplicationCommand( command )
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 3598, in ProcessApplicationCommand
command_processed = CanvasWithHovers.ProcessApplicationCommand( self, command )
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2776, in ProcessApplicationCommand
command_processed = CanvasWithDetails.ProcessApplicationCommand( self, command )
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 1581, in ProcessApplicationCommand
self._Delete()
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 2928, in _Delete
self._SkipPair()
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 3488, in _SkipPair
self._ShowNextPair()
File "hydrus\client\gui\canvas\ClientGUICanvas.py", line 3442, in _ShowNextPair
while not pair_is_good( self._batch_of_pairs_to_process[ self._current_pair_index ] ):
>>8494 I have had a report from another user about a situation a bit similar to yours related to the file service that holds repository update files. I am going to investigate it this week, so please check the changelog for 487. I can't promise anything, but I may discover a bug where some files aren't being cleanly removed from services at times and have a fix.
>>8496 Yes, hit up options->gui pages and check the new preview-click focus options. Note that shift-click is a bit more clever now, too--if you go backwards, you can 'rewind' the selection.
>>8499 Yeah, I like to highlight neat new apps in the release posts or changelogs. I do not make any of the apps, but I am thinking of integrating 'do stuff with this other client' tech into the client itself, so you'll be able to browse a rich central client with a dumb thin local client. A timeframe I can't promise. For me, it'll always be long. I'm expecting my 'big' jobs for the next 12-18 months to be a mix of server improvements, smart file relationships, and probably a downloader object overhaul. I'll keep working on Client API improvements in that time in my small work, and I know the App guys are still working, so I just expect the current betas to get better and better over time, a bit like Hydrus, with no real official launch. Checking in again on the links in the Client API help page in 4-6 months is probably a good strategy.
>>8530 Sure, just point me to some example files (or send me some) and I'll see if it is easy to recognise them.
>>8545 Yes, I want to write some special rules that you can customise for pixel dupes. Some users always want the bigger file, some the smaller, so I'm planning to make the current weights you see in options->duplicates a bit richer, and probably add some 'unless they are pixel dupes, in which case use [ 123 ]' and '[ ] do not care if pixel dupes' side options.
>>8546 Can you paste any of the errors, so I can see more information? They should be in the 'note' column of the search/file log on the downloader page, and you can copy them with the right-click menu. I don't know much about those sites, but if they have complicated redirects or login requirements, or Cloudflare rules, maybe to stop spiders, the situation may be more tricky than the simple downloader can handle. If it is a login situation (i.e. lots of cloudflare problems or 403/401 errors), then Hydrus Companion's ability to copy your browser's login cookies to hydrus via the Client API may help. https://gitgud.io/prkc/hydrus-companion
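For anyone who would rather push cookies over by hand instead of through Hydrus Companion, the Client API has a cookie endpoint. This is a sketch from memory of the Client API docs--double-check the endpoint and payload shape there before relying on it; the key and cookie values are placeholders:
[code]
# hypothetical example: hand one browser cookie to hydrus via the Client API
import requests

API_URL = 'http://127.0.0.1:45869'
ACCESS_KEY = 'your access key here'  # placeholder

# each cookie is [ name, value, domain, path, expires (unix time or None) ]
cookies = [ [ 'cf_clearance', 'example_value', '.archived.moe', '/', 1700000000 ] ]

response = requests.post(
    API_URL + '/manage_cookies/set_cookies',
    headers = { 'Hydrus-Client-API-Access-Key' : ACCESS_KEY },
    json = { 'cookies' : cookies }
)
response.raise_for_status()
[/code]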
>>8547 >If there is no URL data, (e.g. I imported it from my hard drive), and I remove the tags from the files before deletion, and then use the option to make Hydrus forget previously deleted files, would I "mostly" be OK? It depends on what 'OK' means, I think. If you want to remove the hash record, sure, you can delete it if you like, but you might give yourself an error in two years when some maintenance routine scans all your stuff for integrity or something. Renaming the hash to a random value would be better. Unfortunately I just don't have a scanning routine in place yet to categorise every possible reference to every hash_id in your database to automatically determine when it is ok to remove a hash, and then to extend that to enable a complete 'ok now delete every possible connection so we can wipe the hash' command. Telling hydrus to remove a deletion record only refers to the particular file domain where the file was deleted from. It might still be present in other places, and other services, like the PTR, may still have tags for it. It basically goes to the place in the database where it says 'this file was deleted from my files ten days ago' and removes that row. If you really really need this record removed, please don't rebuild your whole client. Make a backup (which means making a copy of your database), then copy/paste my routine into the sqlite terminal exactly, then try booting the client. If all your files are fucked, revert to the backup, but if everything seems good, then it all went correct. Having a backup means you can try something weird and not worry so much about it going wrong. More info here: https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#backing_up
>>8553 The nuclear way to fix this sort of problem, if it is a miscounting situation, is database->regenerate->tag storage mappings cache (all, deferred...). If the bad tag counts here are on the PTR, this operation could unfortunately take several hours. If the tags are just your 'my tags' or similar, it should only be a couple of minutes. Once done, you'll have to wait for some period for your siblings and parents to recalculate in idle time. But even if that fixes it, it does not explain why you got the miscount in the first place. I think my recommendation is to see if you can find a miscounted tag which is on your 'my tags' and not on the PTR in any significant amount. A 'my favourites' kind of tag, if you have one. Then regen the storage cache for that service quickly and see if the count is fixed after a restart. If it is, it is worth putting the time into the PTR too. If it doesn't fix the count, let me know and we can drill more into what is actually wrong here.
>>8555 Damn, thank you, I will look into this.
>>8565 This seems to have fixed it, thank you! However, it's left quite a few unknown tags. I guess those tags were broken, which was the problem in both my counts and parent/siblings. Is there any way to restore those "unknown tag" namespaced tags, or is it better to just try to replace them one by one?
(739.29 KB output.zip)

>>8563 Here are some samples of WavPack from the web: https://telparia.com/fileFormatSamples/audio/wavPack/ But just in case, I attached a short random laugh compressed with a recent release of the encoder on Linux. The format seems to have the magic number "wvpk", as stated on wikipedia and in the github repo: https://github.com/dbry/WavPack/blob/master/doc/WavPack5FileFormat.pdf
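Checking that magic number is about as simple as file sniffing gets--a minimal sketch (the filename is just a hypothetical test file):
[code]
# sniff the 'wvpk' magic number at the start of a WavPack file
def looks_like_wavpack( path ):
    with open( path, 'rb' ) as f:
        return f.read( 4 ) == b'wvpk'

print( looks_like_wavpack( 'laugh.wv' ) )  # True for a valid .wv
[/code]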
Will it be possible at some point to edit hydrus images without needing to import the result as a brand new image? It's annoying opening images in an external editor, making the edit, saving the image, importing said image, transferring all the tags back onto it, and then deleting the old version, when usually all I'm doing is cropping part of it.
I had an ok week. I didn't have time to get to the big things I wanted, but I cleared a variety of small bug fixes and quality of life improvements. The release should be as normal tomorrow.
>>8555 Happens to me when I choose to delete one or both pictures of the last pair presented. The picture assumed to be deleted stays on screen and the window needs to be closed. Hydrus then spits out errors like "IndexError - list index out of range" or "DataMissing". I believe cloning the database with the sqlite program clears the error until one chooses to delete the last pair of duplicates again. Thanks for the hard work.
How long until duplicates are shown properly? Also, is transitive duplicate sorting (as in, files which aren't possible duplicates but have duplicates in common) on the to-do list?
https://www.youtube.com/watch?v=VKuGYKkH3oA windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v487/Hydrus.Network.487.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v487/Hydrus.Network.487.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v487/Hydrus.Network.487.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v487/Hydrus.Network.487.-.Linux.-.Executable.tar.gz
I had an ok week. I was unexpectedly short on time, so I couldn't get everything I wanted done, but I cleared out some small work.
highlights
The big update last week, which I recommended only for advanced users, went well. There don't seem to be any obvious problems with the logic of the new search cache, so I now recommend it for everyone. You will be presented with a popup just before the update runs, giving you an estimate of how long it thinks it will take. Most users should take 5-10 minutes, but if you have millions of files, it will be longer. Just let it run, and some things will run a bit faster and neater in the background. If you have played with 'multiple local file services', then check out the new 'all my files' domain you will see--this is basically an efficient umbrella of all your local file services. It works super fast for things like the duplicates system. I also put some time into the duplicate filter this week. The logic of the queue is improved again, so some rare errors when reaching the end of a batch should be fixed. I also integrated manual file deletes into the queue processing: now, when you manually delete a file, or both files, the deletes will not happen until you commit--just like the other decisions you are making--and they are undoable if you select 'forget' or go back a pair. You also won't see a file you manually deleted again in a batch (it'll auto-skip if that file comes up again). Also, the duplicate filter now has a little 'send pair to page' button, which publishes the current pair to the duplicates page that made the filter, just in case you want to save them for some extra processing after you are done filtering. You can do this with multiple pairs and they'll just stack up in the page. A couple of other neat things happened in last week's advanced-user-only release, which I will repeat here: The 'media viewers' shortcut set has three new zoom actions: 'switch between 100% and max', 'switch between canvas and max', and 'zoom to max'. When you enter pairs in the tag sibling dialog, it shouldn't complain about loops anymore, but instead automatically break them, just like how it will auto-petition an A->B, A->C conflict.
full list
- misc:
- updated the duplicate filter 'show next pair' logic again, mostly simplification and merging of decision making. it _should_ be even more resistant to weird problems at the end of batches, particularly if you have deleted files manually
- a new button on the duplicate filter right hover window now appends the current pair to the parent duplicate media page (for if you want to do more processing to them later)
- if you manually delete a file in the duplicate filter, and that file appears again in the current batch of pairs, those pairs will be auto-skipped
- if you manually delete a file in the duplicate filter, the actual delete is now deferred to when you commit the batch! it also undoes if you go back!
- fixed a bug when editing the external program launch paths in the options
- fixed an annoying delay-and-error-popup when clearing the separator field when editing a String Splitter. now the field just turns red and vetoes an OK with a nicer error text
- also improved how string splitters report actual split errors
- if you are in advanced mode, the _review services_ panels now have an 'id' button that lets you fetch the database service id
- wrote a new database maintenance routine under _database->check and repair->resync tag mappings cache files_, which is a lightweight way of fixing ghost files or situations where files with a tag are neither counted nor appear in file results. this fixes these problems in a couple of minutes, so for this it is much better than a full regen of the cache
- .
- cleanup and other boring stuff:
- the archive/delete filter now says which file domain it will be deleting from
- if an archive/delete filter is launched on a 'multiple locations' file domain, it is now careful to only make delete records for the deleted files for the file services each one is actually in
- renamed the 'default local file search location' option to 'fallback' and updated its tooltip a bit. this was really a hacky thing I needed to fill some gaps while rewriting from 'my files' to multiple local file services. the whole thing needs more attention to become more useful. I also fixed an issue where it could become an invalid 'nothing' if you deleted a file service it was referring to (issue #1155)
- I think I fixed a rare 'did not find info for that file' style problem when highlighting some watchers/downloaders
- I think I have silenced some unhelpful BeautifulSoup (html parser) warnings that were spamming to the log in some situations
- updated last week's big update to work with TRUNCATE journalling mode. I will be doing this for other big updates going forwards, since multi-GB WAL transactions cause problems for some users
- last week's update also gives a time estimate in its pre-popup, based on 60k files per minute
- removed some old database cache data that wasn't cleared in a previous update
- a variety of misc UI text fixes and cleanup
next week
I regret I did not have time for a larger import/export framework. It will have to wait. I have one more week of work before my vacation week, so I will try to just do some small cleanup and polishing so the release is 'clean' before my break.
>>8563 Nice, hopefully the rules come soonish; it would make going through them a bit easier. I definitely want to check out some things in 487, as they are things I made workarounds for, like pushing the images to a page. I currently have a rating that does something similar for when I want to check a file a bit closer, be it a comic page I want to reverse search or something I want to see where it came from. This may be a better option.
>switch to arch linux from windows
>get hydrus running
>use retarded samba share on nas for the media folder
>permission error from the subscription downloader
>can view and search my images fine otherwise, in both hydrus and file manager
Any idea which permissions would be best to change? I'm retarded when it comes to fstab and perms, but I know not to just run everything as root. I just can't figure out if it's something like the executable's permissions/owner, the files' permissions/owner, or something retarded in how I mount it. Pictured are the error, the fstab entry, the hydrus client's permissions, and the permissions for everything in the samba share. The credentials variable in fstab is a file that only root can read, for slight obfuscation of credentials, according to the internet. The rest to the right is stuff I added to allow myself to manipulate files in the samba share, again just pulled from random support threads.
>>8618 >Happens to me when I choose to delete one or both pictures of the last pair presented. The assumed to be deleted picture stays on screen and the window needs to be closed. Hydrus then spits out errors like "IndexError - list index our of range" or "DataMissing". I believe cloning the database with the sqlite program deletes the error until one chooses to delete the last pair of duplicates again. Thanks for the hard work. Appears fixed for me with v487 - Thanks.
Perhaps another bug:
>file>options>files and trash>Remove files from view when they are sent to trash.
Checking/unchecking has the desired result with watchers and regular files, but does not seem to work anymore with newly downloaded files published to their respective pages. Here, the files are merely marked with the trash icon but not removed from view, as had been the case (for me) until version 484.
>>8627 It seems like I can manipulate files within the samba drive but it spits out an error when moving from the OS drive to there. So I guess it's some kind of samba caching problem.
I have noticed some odd non-responsiveness with the program. It is hosted on an SSD. While in full-screen preview browsing through files to archive or delete, sometimes the program will stop responding for approximately 10 seconds when browsing to the next file (usually a GIF but not always). The next file isn't large or long or anything. I'm not sure what's causing this issue. Is it just the program generating a new list of thumbnails?
>>8641 I also wanted to note this issue is not unique to this most recent update. It has been there for a while.
>>8641 >>8642 I guess I should also reiterate that the program AND the database are both hosted on the same drive (default db location)
Well, this is a first: in a pixel-for-pixel comparison against a jpeg, the png was smaller... I'm guessing that jpeg is hiding something.
>>8566 Ah shit, if you have 'unknown tag:abcdef...' garbage, this is strong evidence that you have actually had database damage (to client.master.db), most likely through a hard drive blip. This probably also explains why your searches were jank--your 'client.caches.db' was probably damaged as well. I don't think there is a way to figure out which original tags those 'unknown tag:blah' actually referred to, at least no simple easy one. Basically when the client tried to rebuild your cache, it found gaps in the definition table and filled them with random but valid data. Your next step is to read the 'help my db is broke.txt' document in install_dir/db directory. This has background reading about the nature of hard drive problems and things you should do to check your drive is ok and your database files are ok. If you have a recent backup, hold on to it! If you have a backup, we may be able to recover your bad tags. But before then, make sure everything is safe now and there aren't more problems. Let me know how you get on! >>8568 Thank you! I'll see what I can do. >>8608 I hope that as the duplicate system gets more tech, this will be more possible. Hydrus works on exact file content, so it will never natively support editing, but I hope we'll have smooth and effective 'copy all metadata from this file to this file' tech, including for other conversions like jpegxl, waifu2x, or video re-encoding. For now, though, hydrus is really for 'finished' files.
>>8618 >>8630 Great, thanks for letting me know.
>>8619 I expect to do a big push on duplicates in Q4 this year or Q1 2023. I really want to have better presentation, basically an analogue to how danbooru shows 'hey, this file has a couple of related files here (quicklink) (quicklink)'. Estimating timeframes is always a nightmare, so I'll not do it, but I would like this, and duplicates are a popular feature for the next 'big job'. At the moment, there is a decent amount of transitive logic in the duplicates system. If A-dup-B, and B-dup-C, then A-dup-C is assumed. Basically, duplicates in the hydrus database are really a single blob of n files with a single 'best' king, so when you say 'this is better' you are actually merging two blobs and choosing the new king. I have some charts at the bottom of this document if you want to dive into the logic some more. https://hydrusnetwork.github.io/hydrus/duplicates.html#duplicates_advanced But to really get a human feel for this, I agree, we need more UI to show duplicate relationships. It is still super technical, opaque, and not fun to use.
>>8627 >>8636 I'm afraid I am no expert on this stuff. The 'utime' bit in that first traceback is hydrus trying to copy the original file's modified time from a file in your temp directory to the freshly imported file in the hydrus file system, so if the samba share has special requirements for that sort of metadata modification, that's your best bet, I think.
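That 'blob of n files with a single king' model is essentially a union-find structure where each set remembers its best member. A toy sketch of the idea, not hydrus's actual code or schema:
[code]
# toy model: duplicate groups as union-find sets, each crowned with a 'king'
class DuplicateGroups:

    def __init__( self ):
        self._parent = {}  # file -> representative
        self._king = {}    # representative -> best file in that group

    def _find( self, f ):
        self._parent.setdefault( f, f )
        self._king.setdefault( f, f )
        while self._parent[ f ] != f:
            self._parent[ f ] = self._parent[ self._parent[ f ] ]  # path halving
            f = self._parent[ f ]
        return f

    def set_better( self, better, worse ):
        # 'better beats worse': merge the two blobs; the king of the winning
        # side's old group stays king of the merged group
        rb = self._find( better )
        rw = self._find( worse )
        self._parent[ rw ] = rb

    def get_king( self, f ):
        return self._king[ self._find( f ) ]

groups = DuplicateGroups()
groups.set_better( 'A', 'B' )
groups.set_better( 'B', 'C' )  # transitivity: C's blob merges into A's
print( groups.get_king( 'C' ) )  # 'A'
[/code]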
>>8631 Thank you, I will check this! The specific rules that govern when it is and isn't correct to apply this option are frustratingly complicated, and adding multiple local file services made it more so. I'll have a play and see what I can figure out.
>>8641 >>8642 >>8643 Thank you for this report. Sometimes this is my fault, sometimes it is something else. Since your files are on a fast SSD, we can rule out some weirder things like NAS directory scan times, but I do know that Windows anti-virus got a lot more aggressive in the past couple of years, and pretty much any file you access gets a scan before it is loaded. This can cause a ~50-150ms delay on some video files in hydrus, which are not pre-cached yet. Maybe, if anti-virus was working hard, and the search indexer was also going bananas as it sometimes does, and your client was working hard doing imports and things, all the locks would add up and it would halt. 10 seconds sounds like a bigger problem, though. Can you try turning off the 'normal time' sync under tags->sibling/parent sync->sync during normal time? Does that free you up a bit? You can check the 'review' panel on that same sub-menu to see if your client has a lot of catch-up work to do there. But that's probably only applicable if you sync with the PTR. Do you have a lot of imports, btw? Like, do you have 25+ active file import queues, be they downloaders or hard drive imports or whatever, running at once? It could just be that the file system is overwhelmed with new writes and can't serve you the read request for the gif. Otherwise, please check help->debug->profiling->profile mode. There's a 'what's this?' on the same menu to show you how to use it. Run that for a bit and see if you can capture a freeze, then pastebin or email me the profile log and I'll see if anything helpful was recorded.
>>8644 Wow, yeah, that's the first time I have seen that too. I assume this image is just an anime babe or something, and nothing like a crazy geometric pattern? If you are ok sharing, I'd be interested to either have that jpeg or get a link to a booru it is on, just so I can check it out myself. No worries if you don't want to share. There are probably some EXIF browsing programs out there that might be able to expose it. Another trick is just to export, rename to .zip, and see if 7zip will open it. Some hidden archives are literally just appended to the end of the image file data.
>>8643 >>8647 There is no downloading or synching being done. Client is basically running stock, with no tags or anything (not even allowed to access the internet yet). Think it might be AV? Running Kaspersky on Low (uses very little resources for automated scanning).
>>8648 >>8647 Also, no active running imports. Just an open import window with about 60k files for me to sift through.
>>8649 >>8647 I tried it with an exclusion for the entire Hydrus folder for automated scanning, but the problem persists, so I don't think it's AV related.
Would it be possible to add a sort of sanity check to modified times, to prevent obviously wrong ones from being displayed? I've noticed that a few files downloaded from certain sites since modified times were added to Hydrus show a modified time of over 52 years, which makes me think that files from sites which don't supply a time are given an epoch-0 timestamp. In this case I think it would be better to show a string like "Unknown modification time", or none at all.
>>8652 Also, if I try to download the same file from a site that does have modified times, the URL of the new site is added but the modified time stays at the incorrect 52 years. Maybe there could be an option to replace modified times, for this query/always if a new one is found/only if none is already known (or it is set to 1970). I also couldn't find a way to manually change a modified time, but maybe I didn't look hard enough.
I've gotten my instance of Hydrus into a state where the "parent/sibling sync" process is stuck. I have several parent/child pairs that were working fine, and running on ~v450, but recently I added a few more and, after applying, realized some were the wrong way around parent/child-wise. I went back in and edited the parent tag configs to delete the bad ones and re-add them with the tags the right way around. But it seems my instance has stopped processing the tag updates. tags > parent/sibling sync > review parent/sibling maintenance showed it was aware there was more work to do, but stayed stuck at the same percent done for over 12 hours, even when I clicked the "work hard now!" button and had it set to sync "all the time" (not just during idle time). I used database > regenerate > tag storage mapping cache (all), which caused the "maintenance" window to go back to zero percent done, but it has now not progressed past zero percent done for over 24 hours. I'm not sure the "maintenance" is even doing anything, as the Hydrus client process in task manager isn't using much CPU/RAM/disk at all. I upgraded to v487, but no change in symptoms. This instance has 85 parent configs set and 5,000 files in it, has no subscriptions/services/downloaders, is only using local tags, and is running on Windows 10. The client log seems to have no errors related to a parent/child sync issue, but one error does pop up on each startup:
Traceback (most recent call last):
File "hydrus\core\HydrusThreading.py", line 401, in run
callable( *args, **kwargs )
File "hydrus\client\metadata\ClientTagsHandling.py", line 514, in MainLoop
self._controller.WaitUntilViewFree()
File "hydrus\client\ClientController.py", line 2279, in WaitUntilViewFree
self.WaitUntilThumbnailsFree()
File "hydrus\client\ClientController.py", line 2284, in WaitUntilThumbnailsFree
self._caches[ 'thumbnail' ].WaitUntilFree()
KeyError: 'thumbnail'
File "threading.py", line 890, in _bootstrap
File "threading.py", line 932, in _bootstrap_inner
File "hydrus\core\HydrusThreading.py", line 416, in run
HydrusData.ShowException( e )
File "hydrus\core\HydrusData.py", line 1215, in PrintException
PrintExceptionTuple( etype, value, tb, do_wait = do_wait )
File "hydrus\core\HydrusData.py", line 1243, in PrintExceptionTuple
stack_list = traceback.format_stack()
>>8647 I would send it to ya, but I dumped the trash before I saw your response. So far I have seen a few of these; if I find another I'll send it to ya.
>>8656 Update on this issue: I tried exporting all my parent tags, then deleting all the parent tag configurations and using the database > regenerate > tag storage mapping cache (all), which caused the "maintenance" window to indicate there's no work to do. I then added back in one parent tag from my original set (that only applied to 5 files in the repository) and the "maintenance" window says there's now one parent to sync, but isn't actually processing that one parent.
>>8648 >>8649 >>8650 Hmm, if you have a pretty barebones client, no tags and no clever options, then I am less confident what might be doing this. I've seen some weird SSD driver situations cause superlag. I recommend you run the profile so we can learn more. >>8652 >>8655 Thanks, can you point me to some example URLs for these? I do have a sanity check that is supposed to catch 1970-01-01, but it sounds like it is failing here. The good news is I store a separate modified time for every site you download from, so correcting this retroactively should be doable and not too destructive. I want to add more UI to show the different stored modified times and let you edit them individually in future. At the moment you just get an aggregated min( all_modified_times ) value.
>>8656 >>8662 Damn, this is not good. I'm sorry for the trouble and annoyance. Have you seen very slow boots during this? That thumbnail cache is instantiated during an early stage of boot, so it looks like the sibling/parent sync manager is going bananas as soon as it starts. I have fixed the bug, I think, for tomorrow's release. That may help your other issue, which is the refusal to finish outstanding work, but we'll see. Give tomorrow's release a go, and if it gets to a '95% done' mode again and won't do the last work, please try database->regenerate->tag parents lookup cache. While the 'storage mappings cache' reset will cause the siblings and parents to sync again, the 'lookup' regen actually does the mass structure that holds all the current relationships. It sounds like I have a logical bug there when you switch certain parents around. You don't have to say the exact tags if you don't want, but can you describe the exact structure of the revisions you made here? Was it simply flipped parent-child relationships, so you had 'evangelion->ayanami rei', and it should have been 'ayanami rei->evangelion'? Were there any siblings involved with the tags, and did the parent tags that were edited have any other parent relationships? I'm wondering if there is some weird cousin loop I am not detecting here, or perhaps detecting but not recognising as creating outstanding sync work. Whatever the case, let me know how you get on with this!
I had a good week. I did some simple work to make a clean release before my vacation. The release should be as normal tomorrow.
>>8665 Yes, I did have a few very slow startups: a few times it took like two hours for the UI to show, though I could see the process was indeed started in task manager. Thanks; I'll try tomorrow's release and see if that helps anything. Parent-tag-wise, the process I think I was doing right before it failed was I had a bunch of things tagged with something generic, which had one level of namespacing (e.g. "location:outdoor"), and I decided to make a few more-specific tags (e.g. "location:forest", "location:driving", and "location:beach"; all of which should also get "location:outdoor" as a "parent"). But I first created the parent relationship the wrong way and didn't notice it (so everything that was "outdoor" would now get three additional tags added to it). I saved the parent config and started manually re-tagging (e.g. remove "outdoor" and add "beach" for those that were in that subgroup), and after doing a few I noticed the F3 tagging window wasn't showing the "parent" tag yet (wasn't showing "outdoor" nested under "beach"), and so I went back to the tag manager and realized they were wrong, so deleted the relationship and re-added them the right way and continued re-tagging. After a while I noticed it still hadn't synced, and realized it didn't seem to be progressing any more, and started triaging to see if it was a bug. None of them had siblings defined.
>>8664 >Thanks, can you point me to some example URLs for these? It looks like this is only affecting permanent booru. I'm using pic related posted in one of these threads. Here's a SFW example URL: http://owmvhpxyisu6fgd7r2fcswgavs7jly4znldaey33utadwmgbbp4pysad.onion/post/3742726/bafybeielnomitbb5mgnnqkqvtejoarcdr4h7nsumuegabnkcmibyeqqppa It may be of note that the "direct file URL" is from IPFS, and the following onion gateway URL is added to the file's URLs as well: http://xbzszf4a4z46wjac7pgbheizjgvwaf3aydtjxg7vsn3onhlot6sppfad.onion/ipfs/bafybeielnomitbb5mgnnqkqvtejoarcdr4h7nsumuegabnkcmibyeqqppa The same file is available here with a correct modification time (2022-02-27): https://e621.net/posts/3197238 The modified time in the client shows 52 years 5 months, which is in January 1970. Not sure if there's an easy way to see the exact time.
>>8645 >but I hope we'll have smooth and effective 'copy all metadata from this file to this file' tech Couldn't you just make a temporary "import these files and use _ as _ to find alternates, then do _ if _" for now? Like "import these files and use the filename as the original file hash, then set imported as better and delete the other if imported is smaller"? I mean it sounds like too much when you write it out like that, but the underlying logic should be pretty simple.
https://www.youtube.com/watch?v=AQOfIENN2tk windows zip: https://github.com/hydrusnetwork/hydrus/releases/download/v488d/Hydrus.Network.488d.-.Windows.-.Extract.only.zip exe: https://github.com/hydrusnetwork/hydrus/releases/download/v488d/Hydrus.Network.488d.-.Windows.-.Installer.exe macOS app: https://github.com/hydrusnetwork/hydrus/releases/download/v488d/Hydrus.Network.488d.-.macOS.-.App.dmg linux tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v488d/Hydrus.Network.488d.-.Linux.-.Executable.tar.gz
I had a good simple week making a clean release before my vacation. Everything is misc this week, nothing earth-shattering, just a bunch of cleanup and little stuff. If you have any wavpack files, try importing them!
full list
- the client now supports 'wavpack' files. these are basically a kind of compressed wav. mpv seems to play them fine too!
- added a new file maintenance action, 'if file is missing, note it in log', which records the metadata about missing files to the database directory but takes no other action
- the 'file is missing/incorrect' file maintenance jobs now also export the files' tags to the database directory, to further help identify them
- simplified the logic behind the 'remove files if they are trashed' option. it should fire off more reliably now, even if you have a weird multiple-domain location for the current page, and still not fire if you are actually looking at the trash
- if you paste a URL into the normal 'urls' downloader page, and it already has that URL and the URL has status 'failed', that existing URL will now be tried again. let's see how this works IRL; maybe it needs an option, maybe this feels natural when it comes up
- the default bandwidth rules are boosted. the client is more efficient these days and doesn't need so many forced breaks on big import lists, and the internet has generally moved on. thanks to the users who helped talk out what the new limits should aim at. if you are an existing user, you can change your current defaults under _network->data->review bandwidth usage and edit rules_--there's even a button to revert your defaults 'back' to these new rules
- now, like all its neighbours, the cog icon on the duplicate right-side hover no longer annoyingly steals keyboard focus on a click
- did some code and logic cleanup around 'delete files', particularly to improve repository update deletes now that we have multiple local file services, and in planning for future maintenance in this area
- all the 'yes yes no' dialogs--the ones with multiple yes options--are moved to the newer panel system and will render their size and layout a bit more uniformly
- may have fixed an issue with a very-slow-to-boot client trying to politely wait on the thumbnail cache before it instantiates
- misc UI text rewording and layout flag fixes
- fixed some jank formatting on the database migration help
next week
I am now off for a week. I think I need it! I'm going to play a ton of vidya, shitpost the big streams that are happening, fit some Wagner in, and get on top of outstanding IRL stuff. I'll be back to catch up on my messages on Saturday the 18th. Thanks everyone!
Trying to use Hydrus for the first time; is there a way to add a subscription for videos specifically, so that it leaves out photos?
(480.53 KB 640x360 shitposting.gif)

>>8675 Have a nice vacation OP and watch out for fucking normies.
id:6549088 from gelbooru (nsfw), with the decompression bomb check deactivated. When downloading this specific picture, before it finishes downloading, it makes the program jump to 3 GB of RAM until I close it. It opens normally in a browser, but spikes to 3 GB in hydrus, and since I only have 4 GB it makes the PC freeze. Just wanted to report that. Also, not a native English speaker here.
>>8679 Forgot: using version 474.
>>8668 Reporting in that v488 seems to have fixed both these bugs. There's no longer the thumbnail exception being logged, the startup time to get to a UI window is quicker, and the parent-sync status un-stuck itself. Hooray!
>>8645 This is about what I figured. I pulled the database from a dying hard drive a few months ago. Every integrity scan between now and then ran clean, but I had a suspicion something had gotten fucked up somewhere along the line. Since it's been a minute, any backups are either also corrupted or too old to be useful. Luckily, re-constructing them hasn't been too painful. I made an "unknown tag:*anything*" search page, then right-click->search individual tags to see what's in them. Most have enough files in them to give context to what the tag used to be, so I'll just replace it. It's been a good excuse to go through old files, clean up inconsistent tags, set new and better parent/sibling relationships, etc, so it's actually been quite pleasing to my autisms. I had 80k files in with an unknown tag back when I started cleaning up, and now I'm down to just under 40k. I'm sure I've lost some artist/title tags from images with deleted sources, or old filenames, but all in all, it could be much worse.
Thanks man! Have a good vacation!
>>8676 If you're just subscribing to a booru, they will generally have a "video" tag. You can add "video" to the tag search.
>>8703 nope, not a booru. So there isn't a way to filter that. awh.
Is there any way to get Hydrus to automatically tag images with the tags present in the metadata? Specifically the tags metadata field; my whole collection was downloaded using Grabber.
>>8709 What website is it? You might be able to add to/alter the parser to spit out the file type by reading the json or file ending, then use a whitelist to only get certain file endings (i.e. videos)
I've been using hydrus for a while now and am in the process of importing all my files. Is there any downside to checking the "add filename? [namespace]" button while importing? I think I've got over 300k images, so it would create a lot of unique tags, if that would be a problem.
About how long do you estimate it might take before hydrus will be able to support arbitrary files? I specifically need plaintext files and html files (odd, I know), if that makes a difference. The main thing is just that it'd be nice for me to have all my files together in hydrus, instead of needing to keep my html and (especially) my text files separate from the pics and vids. Also, I'm curious: why can't hydrus simply "support" all filetypes by just having an "open externally" button for files that it doesn't have a viewer for? It already does that for things like flash files, after all.
>>8627 >>8636 >>8646 It seems to be working now. Not sure what changed, but somehow arch doesn't always mount the samba directory anymore and needs a manual command on boot now, which it didn't before. Maybe it was some hiccup, maybe some package I happened to install as I installed more crap, maybe it was a samba bug that got updated.
Is there a way to reset the file history graph, under Help?
>>8668 >>8681 Great, thanks for letting me know!
>>8671 Thank you. The modified date for that direct file was this: Last-Modified: Thu, 01 Jan 1970 00:00:01 GMT I thought my 'this is a stupid date m8' check would catch this, but obviously not, so I will check it! Sorry for the trouble. I'll have ways to inspect and fix these numbers better in future.
>>8674 I'm sorry to say I don't understand this:
>"import these files and use the filename as the original file hash, then set imported as better and delete the other if imported is smaller"
But if you mean broadly that you want some better metadata algebra for mass actions, I do hope to have more of this in future. In terms of copying metadata from one thing to another, I just need to clean up, unify, and update the code. It is all a hellish mess from my original write of the duplicates system years ago, and it needs work to both function better and be easier to use.
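A sanity check like that, plus the min() aggregation mentioned earlier, might look something like this--a sketch only, with an arbitrary one-week threshold, not hydrus's actual code:
[code]
# filter out near-epoch Last-Modified junk, then aggregate the survivors
from email.utils import parsedate_to_datetime

EPOCH_SLOP = 7 * 86400  # treat anything within a week of 1970-01-01 as junk

def parse_modified( last_modified_header ):
    timestamp = parsedate_to_datetime( last_modified_header ).timestamp()
    if timestamp < EPOCH_SLOP:
        return None  # 'Thu, 01 Jan 1970 00:00:01 GMT' lands here
    return timestamp

def aggregate_modified( per_domain_times ):
    good = [ t for t in per_domain_times.values() if t is not None ]
    return min( good ) if good else None

times = {
    'permanent booru' : parse_modified( 'Thu, 01 Jan 1970 00:00:01 GMT' ),
    'e621.net' : parse_modified( 'Sun, 27 Feb 2022 00:00:00 GMT' )
}
print( aggregate_modified( times ) )  # the e621 time wins; the epoch junk is dropped
[/code]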
>>8676 >>8703 >>8709 >>8716 In the nearish future, I will add a filetype filter to 'file import options', just like Import Folders have, so you'll be able to do this. Sorry for the trouble here; this will be better in a bit!
>>8679 >>8680 I'm sorry, are you sure you have the right id there? The gif of the frog girl from boku no hero academia? I don't have any trouble importing or viewing this file, and by the looks of it, it doesn't seem too bloated, although it is a 30MB gif, so I think your memory spike was something else that happened at the same time as (and probably blocked) the import. Normally, decompression bombs are png files, stuff like 12,000x18,000 patreon rewards and similar. I have had several reports of users with gigantic memory spikes recently, particularly related to looking at images in the media viewer. I am investigating this. Can you try importing/opening that file again in your client and let me know if the memory spike is repeatable? If not, please let me know if you still get memory spikes at other times, and more broadly, if future updates help the situation. Actually, now I think of it, if you were on 474, I may have fixed your gigantic memory issue in a recent update. I did some work on more cleanly flushing some database journal data, which was causing memory bloat a bit like you saw here, so please update and then let me know if you still get the problem.
>>8688 Good luck!
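For the curious, the usual shape of a decompression bomb check is to read the dimensions out of the header before decoding any pixels--a sketch with Pillow, where the pixel threshold is an arbitrary assumption, not hydrus's actual limit:
[code]
# reject images whose decoded size would be enormous, without decoding them
from PIL import Image

MAX_PIXELS = 90_000_000  # arbitrary threshold for this sketch

def is_decompression_bomb( path ):
    with Image.open( path ) as im:  # lazy: reads the header, not the pixels
        ( width, height ) = im.size
    return width * height > MAX_PIXELS

print( is_decompression_bomb( 'patreon_reward.png' ) )  # hypothetical 12,000x18,000 png
[/code]
Pillow itself ships a similar guard (Image.MAX_IMAGE_PIXELS), which warns or errors when an image claims to be suspiciously large.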
>>8710 Not yet. I don't inspect EXIF much yet, but I expect some sort of retroactive parser in future. Or I wouldn't be surprised if a user figures out a Client API tool to do this. Unless you mean NTFS tags, in which case I am even less expert. I know there are some tools that can convert NTFS tags into xml files, and I know a user once did that and then munged those files into .txt files for tag import, but I've never done any of that stuff myself.
>>8721 If you do this, make a new tag service for your filename tags under services->manage services->add->local tag service. Call it 'filenames' or something. The downside is these tags are messy. 300k tags won't add much lag, maybe a 0.5-2% slower file load kind of thing, but they will get in the way, and most users find they don't actually want them all that often. Putting them in another service puts them in a little box on their own, where it is easier to hide, compartmentalise, and potentially delete them in future without affecting your 'real' search tags.
>>8722 Not sure. It is number 6 on the 'big stuff' list here: https://github.com/hydrusnetwork/hydrus/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc 'Support more filetypes / arbitrary file import', so I see it happening, and very likely within the next three years. I also want to store my .txt and .html files. I have thousands of fanfics from a sordid past life of mine that I want to categorise. The problem I need to overcome is that hydrus is currently predicated on the ability to infer a filetype based on file content alone. The reasons for this are some bullshit technical stuff mostly related to maintenance and weird downloads, but it is currently needed. If you toss a file called 'file.file' at hydrus, it needs to be able to figure out if it is a jpeg or mp4 just by looking at its insides. Most media files have rigid formats, literally a few bytes at like 'offset 8 bytes, WEBP', that make it easy to recognise them very quickly. Text and HTML have very dynamic content, so figuring out what they are is more tricky. Before I allow all files, I may be able to straight up support text and html, but there will still be problems. HTML is doable, since you basically run it through a parser and see if it raises an error, but then you have to determine if it was HTML or XML. I expect to start work on this soon, since some formats (SVG and some other open-source image-editing formats) are just XML, so I'll start recognising the broad category of XML and then try recognising keywords or those little XML 'this is what I am' declarations (the DOCTYPE/DTD stuff), and we may just fall into HTML support by happy accident. Raw .txt is much more difficult. A one-byte text file with 'a' is a text file as much as a book in japanese unicode is. I probably can't recognise that versus any other arbitrary format, although I can probably get a high confidence guess. Supporting arbitrary files will require some import and maintenance rejigging. I'll have to no longer assume that I can always figure out the mime of a file, and I'll have to pass the mime along from the original file extension or whatever. ALSO there are secondary issues like, at the moment, if the hydrus downloader runs into an HTML file when it expected a jpeg (e.g. some kind of fucked up 404 message that gave 200 instead of 404, which happens sometimes), I raise an 'ignored' error and say 'I think this downloader needs to be taught how to parse this document'. But when we can support HTML, what do I do then?
Do I import the HTML error page as a file? I'll have to do something to the import workflow in general to say when text/html is ok and not ok. I'm leaning towards allowing text files on hard drive import, and then disallowing it on downloader import unless the URL Class specifically specifies it, but how the hell I make that user-friendly I'm not sure yet. Anyway, sorry, I went on a bit there, but that's the basic background. It will come, but it will be a big job, so I need to clear out some other things first. I'm basically done with multiple local file services now, so I'm moving on to some server updates and janny workflow improvements for my next big job. We'll see if that takes me all the rest of this year, but I hope I can clear it out faster, and then move on to the next thing.
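To make the 'infer a filetype from file content alone' point above concrete, the usual trick is a table of magic numbers at known offsets. A toy Python sketch (hydrus's real detection covers many more formats and edge cases):

def sniff_mime(path):
    # guess a filetype from the first bytes of the file, ignoring the extension
    with open(path, 'rb') as f:
        header = f.read(16)
    if header.startswith(b'\x89PNG\r\n\x1a\n'):
        return 'image/png'
    if header.startswith(b'\xff\xd8\xff'):
        return 'image/jpeg'
    if header.startswith((b'GIF87a', b'GIF89a')):
        return 'image/gif'
    if header.startswith(b'\x1aE\xdf\xa3'):  # EBML container: webm/mkv
        return 'video/webm'
    if header[:4] == b'RIFF' and header[8:12] == b'WEBP':  # 'offset 8 bytes, WEBP'
        return 'image/webp'
    return None  # text and html have no magic number, which is exactly the problem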
>>8723 Great, let me know how things go in future! >>8725 What part would you like to 'reset'? All the data it presents is built on real-world stuff in your client, like actual import and archive times. Do you want to change your import times, or maybe clear out your deleted file record?
I had a good week. I did a mix of cleanup and improvements to UI and an important bug fix for users who have had trouble syncing to the PTR. The release should be as normal tomorrow.
When trying to do a file relationship search, is there a way to search for same quality duplicates? I don't see any way to do that, and every time I look at the relationships of a file manually, it's always a better/worse pair. Does Hydrus just randomly assign one of the files as being better when you say that they're the same quality?
https://www.youtube.com/watch?v=6rboksqjPy4

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v489/Hydrus.Network.489.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v489/Hydrus.Network.489.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v489/Hydrus.Network.489.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v489/Hydrus.Network.489.-.Linux.-.Executable.tar.gz

I had a good week getting back into the swing of things. I fixed some important bugs and improved some UI.

highlights

All the downloader pages--gallery, watcher, urls, and simple--have a revamped status system. All the text that shows how file or gallery downloads are going is now generated in a better way, with more error states (e.g. it will tell you when your gallery stopped because it hit the file limit, or when one of the emergency pause states under the network menu has kicked in), and logic in edge cases is improved. Everything is unified now, so the texts are the same across all pages. Also, if a gallery query or watched thread is 'pending', its text now reports that it is waiting for a work slot, rather than staying blank. There _shouldn't_ be any situations now where a downloader is unpaused with work to do but has a blank status.

If you use the multiple local file services system, the archive/delete filter now presents more options when you are done. If the files are in more than one local file service, you can choose where you delete them from, including all applicable. This was confusing and opaque before, so I hope this makes it more clear what is happening and gives you more choice.

I _believe_ I have fixed an important bug some users were having with PTR processing. There was an annoying issue about a 'definitions' file being seen as a 'content' file, or vice versa, that the automatic maintenance could not fix. I finally managed to reproduce the issue and fixed it. I have scheduled a fix in this week's update, so if you have been hit by this, please wait for one more round of file maintenance 'metadata' scans, and then unpause the PTR one more time. Essentially, I think I fixed the automatic maintenance. Let me know how you get on!

full list

- downloader pages:
- greatly improved the status reporting for downloader pages. the way the little text updates on your file and gallery progress are generated and presented is overhauled, and the texts are unified across the different downloader pages. you now get specific texts on all possible reasons the queue cannot currently process, such as the emergency pause states under the _network_ menu or specific info like hitting the file limit, and all the code involved here is much cleaner
- the 'working/pending' status, when you have a whole bunch of galleries or watchers wanting to run at the same time, is now calculated more reliably, and the UI will report 'waiting for a work slot' on pending jobs. no more blank pending!
- when you pause mid-job, the 'pausing - status' text generated is a little neater too
- with luck, we'll also have fewer examples of 64KB of 503 error html spamming the UI
- any critical unhandled errors during importing proper now stop that queue until a client restart and make an appropriate status text and popup (in some situations, they previously could spam every thirty seconds)
- the simple downloader and urls downloader now support the 'delay work until later' error system.
actual UI for status reporting on these downloaders remains limited, however
- a bunch of misc downloader page cleanup
- .
- archive/delete:
- the final 'commit/forget/back' confirmation dialog on the archive/delete filter now lists all the possible local file domains you could delete from with separate file counts and 'commit' buttons, including 'all my files' if there are multiple, defaulting to the parent page's location at the top of the list. this lets you do a 'yes, purge all these from everywhere' delete or a 'no, just from here' delete as needed and generally makes what is going on more visible
- fixed archive/delete commit for users with the 'archived file delete lock' turned on
- .
- misc:
- fixed a bug in the parsing sanity check that makes sure bad 'last modified' timestamps are not added. some ~1970-01-01 results were slipping through. on update, all modified dates within a week of this epoch will be retroactively removed
- the 'connection' panel in the options now lets you configure how many times a network request can retry connections and requests. the logic behind these values is improved, too--network jobs now count connection and request errors separately
- optimised the master tag update routine when you petition tags
- the Client API help for /add_tags/add_tags now clarifies that deleting a tag that does not exist _will_ make a change--it makes a deletion record
- thanks to a user, the 'getting started with files' help has had a pass
- I looked into memory bloat some users are seeing after media viewer use, but I couldn't reproduce it locally. I am now making a plan to finally integrate a memory profiler and add some memory debug UI so we can better see what is going on when a couple gigs suddenly appear
- .
- important repository processing fixes:
- I've been trying to chase down a persistent processing bug some users got, where no matter what resyncs or checks they do, a content update seems to be cast as a definition update. fingers crossed, I have finally fixed it this week. it turns out there was a bug near my 'is this a definition or a content update?' check that is used for auto-repair maintenance here (long story short, ffmpeg was false-positive discovering mpegs in json). whatever the case, I have scheduled all users for a repository update file metadata check, so with luck anyone with a bad record will be fixed automatically within a few hours of background work. anyone who encounters this problem in future should be fixed by the automatic repair too. thank you very much to the patient users who sent in reports about this and worked with me to figure this out. please try processing again, and let me know if you still have any issues
- I also cleaned some of the maintenance code, and made it more aggressive, so 'do a full metadata resync' is now even more uncompromising
- also, the repository updates file service gets a bit of cleanup. it seems some ghost files have snuck in there over time, and today their records are corrected. the bug that let this happen in the first place is also fixed
- there remains an issue where some users' clients have tried to hit the PTR with 404ing update file hashes. I am still investigating this

next week

I ended up doing more cleanup this week than I expected, but I'm happy to have the downloader pages reporting better. They were a real knot before.
I want to spend a little admin time next week, triaging final multiple local file services work and planning future server improvements for when that is done, and then I think I'd like to focus on more small jobs, including some github issues.
>>8743 Yes, 'same quality' actually chooses the current file to be the better, just as if you clicked 'this is better', but with a different set of merge options. The first version of the duplicate system supported multiple true 'these are the same' relationships, but it was incredibly complicated to maintain and didn't lend itself to real world workflows, so in the end I reinvented the system to have a single 'king' that stands atop a blob of duplicates. I have some diagrams here: https://hydrusnetwork.github.io/hydrus/duplicates.html#duplicates_advanced I don't really like 'this is the same' ending up being a soft 'this is better', but I think it is an ok compromise for what we actually want, which is broadly to figure out the best of a group of files. If they are the same quality, then it doesn't ultimately matter much which is promoted to king, since they are the same. I may revisit this topic in a future iteration of duplicates, but I'm not sure what I really want beyond much better relationship visibility, so you can see how files are related to each other and navigate those relationships quickly. Can you say more about why you wanted to see the same-quality duplicate in this situation? Hearing that user story can help me plan workflows in future.
(426.25 KB 958x538 64b0.png)

(8.03 KB 503x125 ClipboardImage.png)

>>8151 What do I do for this? I'm just trying to have my folder of 9,215 images tagged.
What installer does Hydrus use? I'm trying to set up an easy updating script with Chocolatey (since whoever maintains the winget repo is retarded).
>>8755 Figured it out, GitHub artifacts show Inno Setup. Too bad Chocolatey's docs are half fucking fake and they don't do shit unless you give them money. This command might work, but choco's --install-arguments command doesn't work like the fuckwads claim it does.

choco upgrade hydrus-network --ia='"/DIR=C:\x\Hydrus Network"'
(11.25 KB 644x68 ClipboardImage.png)

>>8756 No, actually, that command doesn't work, because the people behind chocolatey are lying fucking hoebags. Seeing this horseshit after THEY THEMSELVES purposefully obfuscated this bullshit is FUCKING INFURIATING.
>>8745 The main thing I wanted to do is compare the number of files marked as lower-quality duplicates across different url domains against the files that aren't lower-quality duplicates (kings, alts, or no relationships), to see which domains tend to give me the highest ratio of files that end up being deleted later as bad dupes and which give me the lowest, so I know which ones I should be more adamant about downloading from and which I should be more hesitant about. This doesn't really work that well if same-quality duplicates can also be considered "bad dupes" by hydrus, because that means I'm getting a bunch of files in the search that shouldn't be there: they're not actually worse duplicates, but same-quality duplicates that hydrus just treats as worse arbitrarily. Basically, I was trying to create a ranking of sites that tend to give me the highest percentage of low-quality dupes and ones that give me the lowest. I can't do that if the information hydrus has about file relationships is inaccurate, though. It's also a bit confusing when I manually look at a file's relationships, because I always delete worse duplicates, but then I saw many files that are considered worse duplicates and thought to myself "did I forget to delete that one?". Now this makes sense, but it still feels wrong to me somehow.
(2.77 KB 306x117 windozeerror.png)

>>8757
>2022 and still using windoze
Time to dump the enemy's backdoor.
>>8753 The good catch-all solution here is to hit up services->review services and click 'refresh account' on the repository page. That forces all current errors to clear out and tries to do a basic network resync immediately. Assuming your internet connection and the server are ok again, it'll fix itself and you can upload again.

>>8755 >>8756 >>8757 Yeah, Inno. There's some /silent or similar commands I know you can give the installer to make it run quietly, and in fact that's one reason the installer now defaults to not checking the 'open client' box on the last page, so an automatic installer a guy was making can work in the background. I'm afraid I am no expert in it though. If I can help you here, let me know what I can do.

>>8758 Ah, yeah, sorry--there's no real detailed log kept or data structure made of your precise decisions. If you do always delete worse duplicates though, then I think you can get an analogue for the data you want. Any time you have a duplicate that is still in 'my files', you know it was set as 'same quality', since it wasn't deleted. Any time a duplicate is deleted, you know you set it as 'worse'. So if you did something like:

'sort by modified time' (maybe with a creator tag to reduce the number of results)
system:file relationships: > 0 dupe relationships

and then switch between 'my files' and 'all known files' (you need help->advanced mode on to see this), you'll see the local 'worse' files (which you set as same quality) vs the non-local worse files (which you set as worse-and-delete), and see the difference. In future, btw, I'd like to have thumbnails know more about their duplicates so we can finally have 'sort files by duplicate status' and group them together a bit better in large file count pages. If you are trying to do this using manual database access in SQLite and want en masse statistical results, let me know. The database structure for this is a pain in the ass, and figuring out how to join it to my files vs all known files would be difficult going in blind.
>>8759
>Unironically being that guy
Buddy, you just replied to a reply about easier updating with something that would make it ten times harder. Not to mention that hilariously dated meme.

>>8760 Yeah, Choco passes /verysilent IIRC, and /DIR would work, but PowerShell's quote parsing is fucking indecipherable, Choco's documentation on the matter is outright wrong, and I can't 'sudo' in cmd. I'm considering writing a script to just produce update PRs for the Winget repo myself, since it's starting to seem like that would be easier, but I don't want to go through all of Github's API shit.
Pyside is nearly PyPy compatible (see https://bugreports.qt.io/browse/PYSIDE-535). What work would need to be done in Hydrus to support running under PyPy?
I noticed that the API method /add_tags/search_tags can only be limited to specific tag services, not specific file services. So with the API, if I want to do some wholesome searching for "curly_hair" images in my SFW file domain and I type "cur", then the NSFW favourite "cursed_tag" will appear among the results even though no images tagged with "cursed_tag" are within the SFW file domain to be retrieved. If we could do something like "/add_tags/search_tags?file_service_name=sfw", then that would hopefully let the privacy/safety level of the available tags match that of the available files. The only other way I thought about handling this was through tag migration to a separate NSFW tag service, but that would need constant updating to make sure all the new "cursed_tags" are filtered out as they get added to "all known tags" as new images enter the collection. On the other hand, file domains only change when new files are added and removed, so the existing pool of tags within them are less vulnerable to surprises.
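For reference, a call like that looks something like the sketch below; the file_service_name parameter at the end is the anon's proposed addition, not something the API actually supports yet, and the exact parameter and response shapes vary a little by client version:

import requests

API = 'http://127.0.0.1:45869'
HEADERS = {'Hydrus-Client-API-Access-Key': 'your 64 hex chars here'}

# possible today: restrict autocomplete results to one tag service
r = requests.get(API + '/add_tags/search_tags', headers=HEADERS,
                 params={'search': 'cur', 'tag_service_name': 'my tags'})
print(r.json())  # something like {'tags': [{'value': 'curly_hair', ...}, ...]}

# the proposal: also restrict by file domain (hypothetical parameter!)
r = requests.get(API + '/add_tags/search_tags', headers=HEADERS,
                 params={'search': 'cur', 'file_service_name': 'sfw'})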
I had a good week. I did a variety of small work and one important bug fix that should speed up media browsing for power users. The release should be as normal tomorrow.
https://www.youtube.com/watch?v=LBzE9JMoCeE

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v490/Hydrus.Network.490.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v490/Hydrus.Network.490.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v490/Hydrus.Network.490.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v490/Hydrus.Network.490.-.Linux.-.Executable.tar.gz

I had a good week working on a variety of small jobs.

highlights

While working with a user, I discovered I had recently messed up the initialisation of the image caches, causing them to use default values and be too small for many power users. This is fixed, so I hope you will get some smoother media viewing, especially when it comes to very large images.

I tidied up some weird file service bugs and annoyances that came from the multiple local file services transition. Fixed some weird 'delete from x' entries on the thumbnail menu, stopped spamming 'all my files' in places it wasn't helpful, figured out better service ordering--just some simple workflow stuff.

And I think I have fixed another PTR processing bug some users had. If you have had 'this update file was missing' errors that wouldn't fix themselves, please try again and let the automatic maintenance run one more time--it should repair your records this time.

And just a fun thing--Mr Bones now has more numbers, and more neatly laid out.

full list

- misc:
- fixed a stupid bug that meant the image caches were initialising with default values (as under _speed and memory_) until you opened and OKed the options dialog (or did some other options-refresh events). sorry for the trouble, please enjoy some smoother image browsing
- mr bones now shows more numbers, and in a neater table. it should be clearer what the percentages are for now, too
- the _manage->regenerate_ thumbnail menu has additional quick maintenance commands for presence and integrity checks and regenerating data in the similar files system
- wrote a new 'special duplicate' button for the edit shortcut set dialog. the list on this dialog doesn't allow duplicates (which meant the old 'duplicate' button was doing nothing), so this duplicates the current actions with 'incremented' shortcut keys. 'a' becomes 'b', 'ctrl+5' becomes 'ctrl+6', and so on. it doesn't always work, but if you want to make ten shortcuts for setting rating 1-10, this should help
- fixed an issue where the thumbnail banner text and the media viewer background text were not changing size or font according to QSS stylesheet rules (issue #1173)
- SIGTERM should now cause a clean program exit (previously it killed the GUI App but left some daemon threads alive for thirty seconds or more). unlike SIGINT, it will not ask you if you are sure you want to exit or if you would like to do shutdown maintenance--it just closes the client promptly
- fixed a bug in last week's importer page status improvements--the hard drive import page wasn't showing all the updates it should have
- brushed up some backup help
- .
- file services:
- fixed a bug where advanced users could set 'all known files'/'all known tags' on a search dropdown.
this search domain is not supported
- in the archive/delete filter, if the current location is 'all my files' and the files being deleted are only in one local file domain, the surplus 'all my files' will no longer appear at the top of the filter's commit dialog
- the file services in the thumbnail select/remove menu are now sorted in the same order as the file domain button in search dropdowns
- the thumbnail select/remove menus now exclude 'all my files' and 'all local files' if those choices are redundant (e.g. if you only have files in 'my files', 'all my files' will be hidden)
- fixed some incorrect 'delete from x' actions appearing in thumbnail right-click menus
- .
- orphan files:
- there's a persistent processing bug some users have where some update files are missing but they won't redownload correctly. I think I fixed the cause this week, so existing maintenance routines will now be able to fix it themselves after another round
- fixed some issues related to deleting files from the repository updates file domain
- the 'clear orphan file records' maintenance command now fixes the 'all my files' umbrella service as well as the 'all local files' one. it also has a nicer description, does some additional file-removal cleanup, and triggers a file recount if problems are found
- moved 'clear orphan files' to the 'files' maintenance menu

next week

Next week is a medium size job week. I want to have another go at building a larger metadata import/export pipeline. I want to start unifying tags, urls, ratings, everything into one thing that can eat up or spit out XML or JSON.
(139.62 KB 1200x1200 4ae.jpg)

>>8779 >flash hider Time to go all the way for what's ours.
The AUR is now 2 weeks out of date. Can someone flag it as such?
>>8781 The maintainer says he gets notifications from GitHub on each release, not sure how much it would affect anything. I did it for you anyway with a throwaway account.
Is there a log file of searches made? I had opened a search for an artist earlier, then closed the hydrus client. Now I can't remember what the artist name was! Arghh! Thanks
For a few versions, I haven't been able to seek in animations with the bar at the bottom (but videos work fine). If it's no bother, could this issue be looked at?
Also, is there a way to delete exact (pixel exact) duplicates, by telling hydrus to keep the one with the most tags, and delete any others? Lol, so I don't have to go through 15000 of them :p Thanks
>>8786 Thanks. The new version is up now, so maybe it made a difference?
>>8790 Seconding this.
>>8790 This, but by deleting the one with the larger file size
A couple bugs I want to report. On 488, apologies if something was fixed in the next versions.

1. When I start or stop the client API service, my entire session reloads. Every search page I have up does its search again. This causes quite the slowdown if I have a lot of search pages open.

2. After I updated from 482 to 488, I noticed that the media viewer sometimes kind of "gets stuck" on the previous image. It's noticeable if one image is much wider than the other - for just a moment, the first image will be moved so that its left side is where the left side of the second image would be. It doesn't happen very often, but it's a little annoying. It might just be because my session is too big or my laptop is old, though.

Thanks, hydrusman.
>>8794 Well, it won't be larger, because I was talking about exact pixel duplicates. i.e. the same exact file.
(21.20 KB 273x500 black people.jpg)

(146.86 KB 273x500 black people.png)

>>8796 No, it only means the pixels are the same. You can't have two of the same exact file in Hydrus. Pics related are all exact pixel duplicates and have different file sizes; this can happen within the same format as well (i.e. PNG with level 1 compression vs level 9 compression).
>>8796
>exact pixel duplicates. i.e. the same exact file
There is some shit tool that losslessly recompresses JPEG files; Pixiv started using it on all uploads 2 years ago, and Danbooru fags disliked that. PNG is lossless at the pixel level, so you can recompress it anytime. JPEG is lossless at the macroblock level (stored as YCbCr), but software usually asks for RGB pixels from the decoder, which causes slight differences when sent to recompress. Also, this fucking website tries to strip metadata, so I always find duplicates in my downloads that had a "Paint Tool -SAI-" or "Adobe Photoshop" string removed.
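A concrete way to see what 'pixel duplicate' means here: decode both files to raw RGB and compare those bytes (or a hash of them), ignoring the container entirely. A rough Pillow sketch, the spirit of the thing rather than hydrus's actual implementation:

import hashlib
from PIL import Image

def pixel_hash(path):
    # hash of the decoded RGB pixels: a png and a jpeg that decode to the
    # same image match here, even though the files on disk are different
    with Image.open(path) as im:
        return hashlib.sha256(im.convert('RGB').tobytes()).hexdigest()

# the two pics above would match despite their different file sizes
print(pixel_hash('black people.jpg') == pixel_hash('black people.png'))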
How do I get Hydrus to display smileys in tags? Like in this username https://www.pixiv.net/en/users/16968790
>>8773 I am afraid I don't know much about PyPy, so I can't talk too cleverly. In terms of libraries, OpenCV is usually a pain, and in our case that means 'opencv-python-headless' on pip. I see on their homepage they say they support twisted and numpy, which are the other two nightmares that usually pop up. Others that do some funky things or are otherwise too integral to the program to ever replace:

beautifulsoup4
Pillow
python-mpv
requests

This page https://www.pypy.org/compat.html suggests everything works, but you don't get the JIT benefit unless you reshape your code? So maybe hydrus would work out of the box, but my weirdass code wouldn't get the benefit, perhaps? If you give this a go when PySide is ready, let me know how it goes and if I can do anything simple to help out!

>>8775 Yep, thank you. I hope to add complex file location definitions to the Client API soon as part of finishing up multiple local file services. Should be within the next couple of months.

>>8788 No, sorry. I don't do a huge amount of logging of your actions. You might like to try doing searches for 'system:time' and then hitting the 'last viewed' tab. Anything you viewed in the last couple days, that sort of thing, may give you a refresher. Note that you need to look at an image for a few seconds for it to be registered in this system.
>>8789 Thank you for this report. I am sorry for the trouble. Can you talk more about this? I am afraid things seem to be working fine for me, so I need more info to be able to reproduce it. Under options->media, how are 'animations' set to display? Is it the native viewer, or mpv? Is it that in both media and preview windows? When you move your mouse down to the scanbar, what happens? Is it normally about three pixels tall and then jumps up to about thirty tall? What happens when you click the scanbar in an animation? Does the caret block move to where you click, or does nothing happen at all? Does the animation pause when you click? How does seeking not work? By animations, do you mean gifs? How about this apng I have tried to attach to this post (forgive if 8chan munges it)?

>>8790 >>8792 >>8794 >>8796 >>8797 >>8798 Now that I have rewritten the guts of the dupe filter, this is my dream for the next step of the duplicates system, and I hope to have something working well before the year is done. We can search the database for pixel dupes, so I think I can make a system to automatically resolve their pairs in certain situations. I am going to start with the easiest resolution first, which is 'A is a jpeg, B is a png'. In this case, A is always better, since B is a Clipboard.png of A that someone posted once. Once I have a workflow that does that simple decision well, I am going to hang more bells and whistles on it so it can make more decisions on things like 'yeah, I want the larger one, always' and move to wider pairs than just pixel dupes. Eventually I hope to figure out a metric of similarity, so we can say something like 'if I download a file and there is a file in the client that looks more than 97% similar, but the new file is less than 80% quality/size/resolution, ditch the new file'. The most important part of this system will be that it is optional and configurable. As this short convo shows, people have radically different ideas on what is worth keeping and what is better, so I want to let you set the automatic rules that work for you. Same with the metric of similarity. Some people will only be comfortable with pixel duplicates, some will be happy with 99.8% similar, some will want to churn through things a bit faster at 95%, or whatever.
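To make that planned first rule concrete, the decision could look something like this sketch (the .mime attributes and function shape are hypothetical illustration; the real thing will be configurable UI, not code):

def resolve_pixel_dupe_pair(file_a, file_b):
    # given two files already confirmed pixel-for-pixel identical,
    # return (keeper, deletee), or None if no automatic rule applies
    mimes = {file_a.mime, file_b.mime}
    if mimes == {'image/jpeg', 'image/png'}:
        # the png is a Clipboard.png re-save of the jpeg: keep the jpeg
        jpeg = file_a if file_a.mime == 'image/jpeg' else file_b
        png = file_b if jpeg is file_a else file_a
        return (jpeg, png)
    return None  # no confident decision; leave the pair for the human filter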
>>8795 Thank you for these reports.

1) Are you sure the searches actually refresh--like if you hit F5--or could it be just that the thumbnails are being reloaded? When you make a big service change, it triggers an options reload, and some graphical things refresh. This gets on another guy's nerves, I know, so I will make it not do this.

2) Yeah, this is some bullshit, and I don't know exactly what is going on. I had hoped a cache fix in 490 would have relieved it back to how it was a few months ago, but I'm still noticing it sometimes. You might notice things are better if you update, not sure. It mostly hits when you move from an image to a video or vice versa. There's a long-time layout problem at the core of this that I need to fix, so I also have a plan for that. I do know about the bug, and I will keep working on it, sorry for the trouble. I'm also going to have to dive into the code and figure out what I changed recently that made this worse.

>>8799 Those characters are valid unicode, so they should propagate just like any other text. I don't know a huge amount about the pixiv downloader (I'm not an IRL user of the site), so maybe it is cutting those characters off somehow? EDIT: I tested it, and the tag seems to parse ok in the downloader. Do you have any trouble with it? Secret tip if you are a Windows user: hit Win+Period and you get their new actually-cool weird-character-enterer charmap replacement. 🔱
>>8801 >Now that I rewrote the guts of the dupe filter, this is my dream for the next step of the duplicates system... Thanks! That's what I'm looking for! I know I have a TON of dupes in my database.
I want to subscribe to an artist tag on *booru (~200 images), and I already have the bits that are good for me downloaded into a manually-imported dir. Is there a way to mark this subscription as up-to-date, so the images I don't need won't be retrieved?
>>8802 I am on a newly installed Linux/PopOS, so it is possible I am missing a package. I do have ttf-ancient-fonts though. In Hydrus all I see is little squares. If I copy it to a text editor I do get to see the smileys. So it looks to me like it doesn't see the font. If I copy the artist name from Pixiv and put it in Hydrus it will not show me smileys either, but it does find the artist. Version 452 btw, might be that, but it worked fine on my Apple.
Any chance for a configurable larger page pre-fetch for thumbnails? Since memory usage is trivial it'd be especially useful to have the thumbs for a full page load in the background for instant viewing, vs the 1-2 second delay when scrolling.
>>8779 Nice, I've always liked Mr. Bones. Now I'm confident in my claim that 99% of everything is shit. I've started to download lots of files from kemono.party that don't have tags and my autism won't let me archive them until they have at least 20 or so basic tags, send help
Also just noticed I've been using my current database for over a year now. I recently found an old backup of a database from before then but assumed they couldn't be combined. Anyway, thanks for continuing to improve this extremely useful program.

>>8801
>A is always better, since B is a Clipboard.png of A that someone posted once.
This is almost always the case, but I have seen at least one PNG that was a pixel dupe of a JPG and was smaller somehow. I don't know what the image was and probably deleted them anyway, but remember doing a double take. Not sure which should be considered better in a case like that.
>>8812 99% deleted? Holy shit.
>don't have tags and my autism won't let me archive them until they have at least 20 or so basic tags, send help
I have an import page with ~4000 files with this exact issue... I don't think you have much to worry about if you're capable of deleting 200,000 files... But are basic tags even necessary when you've only got 1700 files?

>>8802
>Are you sure the searches actually refresh--like if you hit F5--or could it be just that the thumbnails are being reloaded? When you make a big service change, it triggers an options reload, and some graphical things refresh.
Yes. If I make a search, remove some thumbnails from the page, and then start/stop the client API service, "Loading..." appears at the bottom, the stop button next to the search box appears, and eventually when it finishes the thumbnails I removed are back. When I just press apply on the normal options menu, the thumbnails reload but the search isn't done again.
for some reason my japanese ime doesn't work at all in any hydrus text boxes. even when I'm switched to it, my keyboard just inputs normal roman characters. I'm also not able to switch to and from the ime and normal text input when in hydrus windows. It's like it gets blocked or ignored. Hydrus is the only program so far that's given me this problem so I don't think it's the input method that's the issue. I tried that special insert mode but it still doesn't work. the input method I'm using is fcitx5 with kkc. I tried using mozc instead of kkc but that still doesn't work. and I'm on the linux version of hydrus if that's important.
I was ill this week and am short on work time. I will spend tomorrow doing some more normal work instead of the release. 491 should be on the 13th of July. Thanks everyone!
>>8817 I noticed this as well, ibus+fcitx5. I remember it working at some point in the past. Not a huge problem for me, I alias the most common Japanese tags to English ones because it's less of a hassle to search for English text either way.
>>8824 Retard moment, I actually use fcitx5+mozc.
>>8825 Hmm. My fcitx+mozc is working well. I remember fcitx5 is still wacky; try using fcitx.
Is there a way to search for files that have been previously deleted and re-imported? I know that Hydrus has this information, because if I try to delete a file that I have deleted in the past with the advanced delete menu, it has "keep previous reason" available. Something like this could help clean up an inbox if you accidentally left "exclude deleted files" off on a search. Maybe not useful for the average user, but for someone like me it would be pretty useful. If there isn't, maybe something like "system:previously deleted" could be added.
When using the duplicate filter is it possible to copy ptr tags to my tags when copying from worse to better?
Could you add a number next to the rating shapes for what rating you gave a file? It's kinda difficult to determine the rating of a file on a 20-point (plus 0, so 21) rating service just by the filled-in shapes alone. In fact, I'd say that once you get past 10 shapes, it should probably be represented by just a number, but just having a number in addition to the shapes is fine with me. Another solution could be to represent 20-shape rating services as a 10-shape service that allows half-filled shapes, as sketched below. So a rating of 1 is a single half-filled shape, 2 is 1 full shape, 4 is 2 full shapes, and so on until 20 is all 10 shapes filled. The only issue with this is that at a glance, it means that empty 10- and 20-shape rating services look the same. This could be solved with the little number next to the shapes I was just talking about, or it might not even need to be solved really, because a given number of filled shapes on a "20" shape rating service should about match the same rating as the same number of shapes filled in on a 10-shape service. By that I mean that 10/20 (5 shapes filled) is about equal to 5/10 (also 5 shapes filled). Of course there's also the solution of just splitting the shapes into 2 rows, but I have a feeling you didn't do that because it would make 1 rating service look like 2 different ones. Anyway, that was a bit of a tangent. I really just came here to ask for the little numbers to make the rating easier to read, but that other stuff could be cool too.
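The half-filled-shape idea is just integer math, for what it's worth. A tiny hypothetical sketch of mapping a 0-20 rating onto ten shapes (nothing like hydrus's actual rating code):

def twenty_point_to_shapes(rating):
    # 1 -> one half shape, 2 -> one full shape, 20 -> ten full shapes
    full, half = divmod(rating, 2)
    return '★' * full + '½' * half + '☆' * (10 - full - half)

print(twenty_point_to_shapes(11))  # ★★★★★½☆☆☆☆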
(33.09 KB 945x218 2022-07-09_10-39-54.png)

I accidentally imported a couple of files into Hydrus that actually belonged somewhere else, managed by a separate application. I did not remove them from the Hydrus db, I just had the other application move the files to where they should be. In retrospect, I should have anticipated that Hydrus would take issue with this, and would not just silently and unquestioningly ignore these missing files. Whoops. Anyway now I have pic related. What can I do about it? How do I fix this outside of Hydrus?
>>8804 Not a simple way, but it isn't something to worry about too much. Most boorus--not all--will offer 'md5' hashes on their site, which hydrus understands; that allows it to recognise when it already has a file and skip the download. For the sites that don't, you'll only have to do the surplus download once, and then hydrus won't hit that URL again, so you are talking a couple hundred MB wasted at most, normally. If that amount of data is a problem for your internet connection, then I think you'll want to create your new subscription with very small 'first time' and 'periodic' file limits. I think the default is 100/100, so try instead setting it to 5/5, which will stop it reaching too far into the past. Come back in a month and raise it to 25/25. That's a bit hacky, so you are probably babysitting it for a bit no matter what.

>>8805 Ah, yeah, if you get little squares, then your OS/Qt can't generate the right characters given your fonts. You might see a difference between the tag in your tagbox vs the tag in a thumbnail banner, like in my picture. iirc, they actually use slightly different graphics engines, so sometimes Qt can figure out a unicode character on the thumbnail banner when it can't on the taglist. Try to get some more font support for your OS, I guess? Normally any new OS can figure this stuff out. But I don't know anything about PopOS or Linux font stuff, I'm afraid. Or, if you are open to some ugly technical work, you could play around with the font settings in your QSS files in install_dir/static/qss (and options->style). Maybe one font can do it, but another can't? As a side thing, what about some more normal unicode, like these characters: 日本語 Do they show in a tag ok? I'd expect you can show those ok, but not the emoji things, and this is basically just a limited utf-8 character set in that font.

>>8806 Good idea, thanks!
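On the md5 point above, the idea in miniature: hash the local files once, then skip any booru post whose listed md5 is already known. A hedged sketch (the paths and the booru side are stand-ins):

import hashlib
from pathlib import Path

def file_md5(path):
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):
            h.update(chunk)
    return h.hexdigest()

already_have = {file_md5(p) for p in Path('/my/imports').glob('*') if p.is_file()}

def should_download(post_md5):
    # boorus list an md5 per post; if it matches a local file, skip the download
    return post_md5 not in already_have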
>>8812 This is cool. You are probably the most discerning user I have seen, well done! For the jpeg/png thing, I was working with a user the other day on a situation like this. It turned out the giganto jpeg actually had a fucking ton of Adobe garbage in its metadata. It was some layer/element definitions stuff that had somehow been embedded in the jpeg header. I'd still generally say the jpeg was 'better' in that case, since it was original, but a stripped jpeg that was pixel perfect would probably be better again. I am biased here, though--I harbour a deep hatred for pngs of busy raster graphics. As always, any of these systems will be highly customisable and optional, I know feelings differ.

>>8814 Damn, thanks. I'll try to work on this this week, let me know if I fix it!

>>8817 >>8824 Ah, damn, thanks for letting me know. My tag input has some weird focus stuff going on, so maybe that is conflicting with IME entry on a newer version, or maybe I changed something on my end recently. Can you try two things for me?

1) Does IME work in a really generic text box, like the one at options->gui->application display name? That's a boring text box that does nothing special, so if IME doesn't work there, this may be a Qt problem, not a hydrus problem.

2) Does IME work if you hit options->search and disable the 'autocomplete ... float ...' options? You'll have to restart the client to get it to kick in, but it will embed the autocomplete dropdown box into the page. This solves several weird autocomplete problems.

If this is a Qt problem, I hope that we will be testing out Qt6 in the next few months, and that will have a lot of bug fixes, so that may be the time to revisit this.
>>8828 I am not sure, but I don't think so in the UI. When it is able to repopulate the 'keep previous reason' part, that is kind of by accident, and I am not sure how rigorous the database has been about keeping those records. I think for some periods, the full delete data has been removed, including the previous delete reason. You can search deleted files, using 'system:file service', or by using the file domain selector button and selecting 'multiple locations' in help->advanced mode, but that searches the actual deleted file records. That stuff is always cleared out when you re-import. I don't officially keep track of which files have been re-imported after a delete. If you wanted to hack a solution to try and infer this data, you could do it using SQLite: you'd fetch all the 'current' hash_ids and intersect them with all the hash_ids that have a deletion reason record. Let me know if you want to try this as a larger technical job. Unfortunately this stuff wouldn't really be compatible with anything in the UI, though.

>>8829 No, I don't think so. It just does intra-service copying atm. Maybe in the future. If you are feeling clever, you could probably do this manually by doing a search for 'system:file relationships >0 dupes' and 'system:file relationships: is not the best quality file of its duplicate group', then going ctrl+a, then F3, and then in the manage tags window clicking cog button->migrate tags for these files--then you'd be able to send PTR tags to 'my files' just for them. If you try this, make a backup beforehand, just in case it goes wrong. 'migrate tags' is powerful and dangerous.

>>8830 Thanks, these are good ideas. Ratings are still pretty much on version 1.0 of their UI, and I should really get around to improving them and adding more display options. In the meantime, a really quick thing I can do is just say the rating number on their tooltip, so at least there is one way to read it.
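The intersect idea above, sketched out; the table and column names here are hypothetical stand-ins, since the real hydrus schema is split across several database files and is harder to navigate than this:

import sqlite3

con = sqlite3.connect('client.db')  # work on a backup copy, never the live db

reimported = con.execute('''
    SELECT hash_id FROM current_files      -- hypothetical: files currently local
    INTERSECT
    SELECT hash_id FROM deletion_reasons   -- hypothetical: remembered deletes
''').fetchall()
print(len(reimported), 'files look previously-deleted-then-reimported')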
>>8841 This dialog is not actually worried about missing files, but entire missing folders, so there may be something bigger going on here. If you go to install_dir/db/client_files, you should see 512 folders: 256 start fxx, 256 start txx. That dialog is worried that some of those are missing. Did your other application move or delete the entire folders? If it did, then a lot of your files have been moved somewhere.

The solution here is to move the fxx or txx folders back to hydrus's proper location and boot again. If you cannot recover the missing sub-folders, then you should create empty ones named according to what that dialog says, and then start the process of recovering your missing files. The document at install_dir/db/help my media files are broke.txt is good background reading here. If you are missing all 256 fxx folders, then you have a very serious problem. Let me know.

Anyway, if you just have a couple missing files, hydrus doesn't mind too much. It'll boot ok, but if you try to load the files into thumbnails, it'll then notice, give you some polite error popups, and then you can start fixing it.

As for the 'may be something bigger going on here': maybe this other application moved or deleted sub-folders, but if you know it wouldn't have, then you have a big deal problem with your hard drive. Again it depends on how serious the dialog you posted a screenshot of is. If you are just missing one 'txx' location, it isn't so bad, but if you are missing fifty-three different places, then you have had a serious hard drive problem. help my db is broke.txt, also in the db directory, may be other useful background reading here, if you need to check your drive is healthy. Let me know how you get on!
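If you want to check that folder structure quickly, a small sketch: hydrus expects 256 'fxx' file folders and 256 'txx' thumbnail folders, each named with two hex digits.

import os

CLIENT_FILES = r'C:\Hydrus Network\db\client_files'  # adjust to your install

expected = {prefix + format(i, '02x') for prefix in 'ft' for i in range(256)}
missing = sorted(expected - set(os.listdir(CLIENT_FILES)))

print(len(missing), 'of 512 folders missing')
for name in missing:
    print(name)  # if unrecoverable, create these empty, then boot and repair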
>>8847
>Does IME work in a really generic text box, like the one options->gui->application display name?
It doesn't. Trying to switch layouts to the ime just inputs a space, seemingly bypassing the shortcut (which for me is Super+Space, but I also tried Ctrl+Space before and that didn't work either) and just entering the raw text. If I manually switch to the japanese input outside the window, it just continues to input the raw characters. No converting to japanese characters. No underline. No suggestion box popping up.
>Does IME work if you hit options->search and disable the 'autocomplete ... float ...' options?
Disabling those 2 options also seems to have no effect, before or after restarting.
>If this is a Qt problem, I hope that we will be testing out Qt6 in the next few months, and that will have a lot of bug fixes, so that may be the time to revisit this.
That'd really suck, but if that's the case then it's out of your hands, so oh well, I guess I'd have to wait.
Hey guys, I accidentally downloaded some pics but forgot to check the grab tags box, so now they don't have any tags. Is there any way to download the tags for these files without having to redownload the pics?
>>8852 I don't think hydrus ever actually redownloads the file if you already have it in the database, just the page. In the tag import options, there should be an option to force page fetching even if hydrus recognizes the url. Turn that on temporarily and that should be what you want.
>>8853 Thanks, I'll try that. I wonder if there is a way you can just select the files, and then say "redownload these", and then set the "force page fetching", etc. to get whatever tag data, etc. you wanted without having to redownload the pic itself.
It appears that the option to disallow deleting archived files is buggy. It just silently ignores attempts to delete the file, even if that's what you really want. By that I mean stuff like the duplicate filter will say that it's going to delete the file, but then nothing happens because the file is archived. Another example would be where you select some files to send to the trash, but the ones that are archived are just silently not deleted. I feel like instead of silently doing nothing, it'd be better if it could let you know that nothing happened because the file is archived, and either cancel the whole operation or just the deletion of the archived files. Or even better, just pop up an extra confirmation asking if you want to trash the archived files too. I like the safety of having something stopping me from accidentally deleting files I didn't want to delete, but I feel like this implementation is too simple and leads to annoying circumstances.
Hmm, I can see the files I need to regenerate tags for, I just don't know if there is any way to select them and tell hydrus to redownload the tags for them. Using system:untagged
>>8853 Thanks. That did work for a majority of the files.
>>8857 Ok, I figured out how to do this using Hydrus Companion.
Search for all the files that have no tags, using System: Number of tags, 0.
Select all the files and tell hydrus to open them all in tabs on your web browser, using right click, Known URL's, Open.
Using Hydrus Companion, select "Send all tabs to Hydrus". This will import all the urls to Hydrus, including tags.
You can now delete all the untagged images in hydrus, because hydrus now has duplicates of them that are tagged.
Was trying to use the ! operator in hydrus, but it's not working. Hit the 'OR*' button in search; all the other operators seem to work, like &&, but ! doesn't. Neither does - or not. For instance, johnny -test or johnny !test doesn't work. Putting a space between the operator and the tag doesn't work either.
>>8812
>I've started to download lots of files from kemono.party that don't have tags and my autism won't let me archive them until they have at least 20 or so basic tags, send help
Personally I have a 10-star rating for archive. If I encounter the exact same file 11 times (they start with 0 stars) and don't get rid of it, it moves into archive; at my discretion, certain files move to archive without the 11 encounters. I find this speeds up parsing quite a bit, along with a 'delete later' button that just sets a delete rating and moves to the next image. This way knee-jerk decisions are encouraged, and I can re-parse the deletes via thumbnails before I fully delete.
>>8847 If it ever comes to a "these files are the same", I always, without any hesitation, go for the smaller file, because there is no reason not to at that point. With that said, is there any way for the program, if we do get auto-delete functions, to figure out whether any of the files have something hidden in them? Or even just a general check for all images. I know I have found ones with video/audio/rars hidden in them, and am kind of paranoid about that.
>>8861 confirming that this doesn't work for me either. The advanced searching must be bugged.
>>8864 Thanks for confirming. It used to work, but something broke.
There's this issue with nested tabs where, when using the shortcut to move to the next or previous tab, if you move to a tab that has subtabs (so a page of pages), the innermost row of tabs will capture the focus and cause the shortcuts to move between tabs in that innermost row instead of at the higher level you were at before. Could you fix that behavior so that the shortcuts keep you at the level you're on when moving left and right between pages? Maybe you could also add 2 more shortcuts to move up and down a level for when you do want to move between rows.
I had a good couple of weeks. I overhauled an ancient system behind the scenes and did a heap of little jobs, fixing a bunch of bugs and improving quality of life. The release should be as normal tomorrow.
Maybe I'm missing something here, but when downloading a gallery with the "download -> url import"-type page, next pages aren't recognized / downloaded (the search log stops at the first page and states "1 successful"). The link parses fine in the parser's test parse; I can see the "next page url (priority 50)" in the results window. Is this intended / am I simply missing something / do I really need to make a gug just to get my next page? Thank you!
>>8873 Network > Logins > Manage Logins. Make sure you're logged in to the site.
>>8874 Oh wait, sorry, you're talking about URL import. Not sure if login would affect that.
https://www.youtube.com/watch?v=OQEDWiM-QRI

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v491/Hydrus.Network.491.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v491/Hydrus.Network.491.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v491/Hydrus.Network.491.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v491/Hydrus.Network.491.-.Linux.-.Executable.tar.gz

I had a good couple of weeks doing a behind the scenes overhaul and a variety of quality of life work. I was ill last week and put off the release, but I am feeling great now.

metadata import/export

As is often the case, an important overhaul makes few actual front-end changes. I've been trying to do this for a while, but the 'export/import tags in a neighbouring .txt file' routine now works on completely new tech. Rather than hardcoding tags into .txt files, it now uses a modular system that I will be able to expand in future to support filtering and string processing, more metadata types like ratings and URLs, JSON and XML as the file formats, and even funky migrations like converting tags to URLs. As a side benefit, Export Folders now support tags .txt export.

For now, all the UI front-end looks the same, but please expect this to change in future. I'll be writing some nice unified panels and dialogs to handle the new objects as I write them.

advanced user highlights

The 'OR*' advanced tag input now supports system predicates! It uses the same system predicate parser as the Client API, so you can now type or copy/paste most system predicate text and get something both useful and complicated. The text that parses isn't always exactly the same as the predicate label, so check out the big list of example parseable system predicates here: https://hydrusnetwork.github.io/hydrus/developer_api.html#get_files_search_files

Also, if you are a parser creator, String Processing now has a Tag Filter processing step. Let's say you can grab all the tags from somewhere but you need to filter out a handful of non-tag text like '+' and '?', or you are able to create hydrus namespaced tags and want to filter by namespace: just insert this into your string processing, and it should be much easier than messing around with long regexes.

full list

- system predicates:
- the advanced OR input, where you can type tags in complicated logical expressions, now supports system predicates! most system predicates are supported using their typical display strings. it uses the same engine as the client api, so check the examples here https://hydrusnetwork.github.io/hydrus/developer_api.html#get_files_search_files sorry for the delay here
- the advanced input also runs tags better through the hydrus tag 'cleaning' process, so things like whitespace between the namespace colon and the subtag are cleaned up correctly, and invalid tags should be excluded
- it also starts with the keyboard focus in the text input
- and I think I fixed an issue with '!', 'not', or '-' negation prefixes not parsing
- highlighted the example parseable system predicate texts in the Client API help, and added 'last viewed' to it
- .
- misc:
- altering your services in _manage services_ no longer causes a full page refresh for all currently open search pages
- in a related thing, if you click the file or tag domain of a file search page to be the same as it just was, you no longer get a page refresh
- the rating widgets now show their current rating value on their tooltips
- when setting a numerical rating by a drag, it no longer matters if your mouse strays above or below the widget--it will still set
- the String Processing system has a new 'String Tag Filter' processing step. this applies the normal tag filtering object to your list of strings and also performs the hydrus 'tag cleaning' process on them, making them all lowercase and trimming whitespace and so on
- the sibling/parent sync is now even more polite when told to do work in 'normal' time. this has been hitting a lot of new users really hard, so it should now really trickle work during normal time, throttling down when it hits a bump to avoid stunlocking you but also responding quickly to recent changes if you are fully synced
- the database repair code is now better at healing damaged fast-text-search (FTS) tables. previously, in cases of partial damage to the virtual table, the repair code would error out
- fixed a bug where certain search predicate calendar dates that are acceptable in Linux but not in Windows caused Windows to fail to load the session. if you put in 1965 as a search date, it should now revert to the current time on next load etc...
- the test to see if a directory is writeable-to is improved and now handles Windows's Program Files directory correctly
- improved how the boot scripts handle incorrect/bad database directory paths. the error handling works better, and it figures out a fallback location for crash.log better
- a new button on 'review services' now lets advanced users copy the service key to the clipboard
- the migrate tags dialog now lists file repositories, ipfs services, and 'all my files' as potential file filter domains
- when checking it has space for a large transaction like a vacuum, hydrus now tries to check if you are running on a ramdisk or other severely space-limited temp dir and offers more text if this is true
- updated the '4chan style thread api parser' to handle posts with multiple files, which fixes tvchan.moe and probably anything else running NPFchan
- some logic testing around showing 'return to inbox' and the actual operation is fixed so it only applies to local files. in some weird advanced situations, you could previously send deleted files to inbox
- .
- new import/export framework:
- started a new modular metadata import/export pipeline. this thing starts out today by doing the work of newline-separated tags in a .txt sidecar file and will expand to do all sorts of metadata in other formats like JSON and XML. it will also, eventually, support arbitrary cross-type conversions like tags to urls or ratings to tags
- export folders now support '.txt' sidecar tag exporting!
- the '.txt' sidecar tag importing in import folders or manual imports is now handled by the new pipeline
- the '.txt' sidecar exporting in the manual export dialog is now handled by the new pipeline
- please expect the UI around '.txt' sidecar importing and exporting to change significantly in future. you'll be selecting different metadata types to import or export, making string processing steps to alter or filter what you get, and of course be able to compile it all into more complicated filetypes
- .
- cleanup and refactoring:
- mr bones gets two new columns to line up the numbers better
- a bunch of export code got moved around. created a new module 'exporting', and moved ClientExporting.py to it, renaming to ClientExportingFiles.py
- removed an old prototype for sidecar exporting and related plans for UI
- the 'missing file folders on boot' dialog now points users to 'help my media files are broke.txt'
- brushed up the 'help my x is broke.txt' documents in the database directory a little
- fixed some surplus double backslashes in the help
- a secret tiny label change/fix, let's see if anyone notices
- cleaned up how the rating widgets manage and update rating state. it was ancient bad code
- updated how different rating values are converted to UI text
- misc cleanup of some free space checking code
- fixed some bad quote characters in client api help JSON examples
- improved some error handling for uploading pending content and sped up file uploads a little

next week

Next week is a cleanup week. I'll try and break up some more monolithic database code.
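Since the OR* input and the Client API now share the same predicate parser, the same strings work over the API too. A hedged sketch (the access key is a placeholder, and the exact parseable predicate spellings are per the help page linked above):

import json
import requests

API = 'http://127.0.0.1:45869'
HEADERS = {'Hydrus-Client-API-Access-Key': 'your 64 hex chars here'}

# system predicates go in as plain strings alongside normal tags
tags = ['creator:someone', 'system:archive', 'system:filesize > 10MB']
r = requests.get(API + '/get_files/search_files', headers=HEADERS,
                 params={'tags': json.dumps(tags)})
print(r.json()['file_ids'])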
Nice! Thank you!!
>>8876 Welcome back OP and thanks.
I think the descriptions in the "speed and memory" options should indicate more clearly that the "in % of cache" limits are restricting based on an image's dimensions alone. Perhaps stating instead: "Maximum cache requirement (in %) of an image" - "at most a *WidthxHeight* or equivalent image".
The NOT operand seems to work again. For instance, -chair seems to block any pics with the tag chair. It also works combined with other tags using AND, e.g. red hair AND -chair.
Is there a way to get subscriptions to notify you when a download fails, instead of just silently going to the next download? Maybe a message popping up with the download number and error note?
>>8850 Damn, thank you for the update. I will do some investigation to see if this really is some Qt5 issue or what the hell is going on here.

>>8854 In future, yeah, select the files you want, and then right-click->known urls->copy 7 safebooru urls or whatever, and then paste those URLs into a new 'urls downloader' page that you've set up with the right 'get the page anyway' tag import options. It'll skip redownloading the actual image in all these cases, by the way. It generally tries to be as efficient as possible. When you say 'force page fetch', it only gets the html page (which has the tags).

>>8856 Thanks. I think this is a good idea. I will search for the places this tech works and see if I can improve the workflow and add another yes/no or something.
>>8861 >>8864 >>8880 Sorry for the trouble and thanks for the report, I saw it late tuesday night. I must have broken it some time ago by accident. Let me know if you have any more problems.

>>8863 This is probably tricky to do automatically, since by its nature, any hidden content is interesting and something you need a human brain to figure out and understand. What I can do is add rules so you can very carefully specify what you want deleted according to your own confidence. Maybe you only start with the uncontroversial jpg/png swap, and then later, if I develop a 'system:exif has some interesting shit!' search predicate, you can swap that in to shape what an automatic decider will or won't act on, or maybe instead of deleting a file like that, it gets sent to a different file service for you to put human eyes on later. This might also be a job for the Client API, if there are good external scanning tools.

Also, while people feel differently, I think this specific subject may also be a FOMO thing. I struggle with this myself a lot, but I'm trying to teach myself that with millions of files, most of what I put through the archive/delete filter as just 'ok' I am probably never going to see again in my whole life, so my saving them may end up being mostly a moot point. There's no point sweating at the thought of losing one or two things here and there, because there is always a firehose of new great things coming tomorrow, and you can't keep up with that either.

>>8869 I was talking with some users about this a few weeks ago. Unfortunately it is tricky for me to support this behaviour without rejiggering a bunch of how these actions work. I will investigate to see what else I can do. Someone did mention that if you hit shift+tab several times on a page, the focus will actually go to the page tab, and Qt can handle navigation keys like left/right better than my in-built shortcuts.
>>8873 Yep, sorry, the urls downloader page can only do a single gallery page. That page type doesn't have all the guts of the cleverer downloader. You can bodge what you want, I think, if you open a gallery downloader page, create a dummy query using whatever text you want, pause and kill that query in the 'search log', and then (maybe with help->advanced mode on) click the dropdown arrow on 'search log' and select 'import->from clipboard'.

>>8879 Thanks, I will.

>>8887 They are an automatic system, so I don't like to have them spam too much at you, lest it make 1,200 popups one day. If you are downloading from a site that regularly has connection trouble, I recommend you make a weekly job to check your 'manage subscriptions' window, look for '2F' in the 'items' column, and hit 'retry failed' there. In future, I want to have a nicer domain-based error system that can handle all this better. Maybe then subscriptions will have a nice avenue to report problems on a particular domain and even come back and try again later automatically, so you won't even notice anything was ever wrong.
Is hydrus multi-threaded? I've been deciding between an intel i5-12400 and an i5-12600, and was wondering if the additional cores found in the 12600 will result in better performance.
>>8896 they're literally identical, one just has a higher clock speed, so obviously it will execute everything faster regardless of the program. they literally just tossed in a better graphics chip to justify reselling the same cpu but overclocked
Could you adjust the "if one is archived, archive the other" option in the duplicate merge options so that it won't archive a file you chose to delete in the duplicate filter? Because of the way it works now, I noticed a bug: if you have the "don't allow deletion of archived files" option enabled, the duplicate filter will archive the file you marked for deletion, then fail to delete it because of that stop-archive-deletion option, and just silently move on without deleting the file at all. I caught this after a week of doing a bunch of duplicate filtering and... yeah, I had to do a lot of looking through those files again. I like both of those options, so making the merge archiving skip files you told the filter to delete, so that the filter can properly delete them, would help.
>>8897 Ah I should have been referring to the 12600K, which has additional efficiency cores
>>8900 strange chip, still only has 6 real cores. the other 4 are single-thread cpus with some other architecture and a low clock speed, no idea what their purpose is, they look like low power phone cpus. even without them it's still faster regardless of multithreading, because the normal cores have a higher clock speed. also there's 2mb of extra cache in this one, which does a lot to boost programs written by idiots that can't optimize, makes no difference for optimized programs though, but if it's pyshit you know which category it belongs to
>>8770 Just wanted to say, I tested this shit in command prompt, too, and it's not powershell's fault, Chocolatey is just a piece of shit with broken quote parsing, incorrect documentation and no interest in learning about their own mistakes. Fuck chocolatey.
(6.40 KB 264x231 ime example.png)

>>8817 >>8824 >>8850 >>8889 Hey, I am sorry to be the bearer of bad news, but I figured out how IME works on Windows today and did not have any trouble turning it on. I think therefore that the typical Qt hooks that enable IME are working ok, and I haven't fundamentally customised anything to break that. So, with the proviso that I have never used this stuff before so I'm no expert, I think this is an error related to:

A) fcitx5/kkc/mozc, or your system overall, maybe a recent update on their end that conflicts with Qt5?
B) Your shortcut to turn on IME conflicting with the hydrus shortcuts system.
C) Something I did, but for whatever reason it only affects the Linux build. Could be something esoteric like the github build environment.

If you can, please test B by turning on help->debug->report modes->shortcut report mode and then trying to turn your IME on in the tag search box. You should get some popups--is it trying to match what you type to an action? On Windows, I set the shortcut to ctrl+space, and it worked ok on both the clever and dumb text inputs.

For C, we might be able to test if there is a difference between the built release and the source, assuming you are running the built release. If this guy is running from source >>8827 , that may explain why it works for him, and this is indeed some weird setting where the 'Qt make IME work' library isn't being added for whatever reason in github. If you are also running from source (e.g. I believe the Arch package does), then the problem wouldn't be with the build, though.

Last ditch possibility: I am expecting to roll out test builds on Qt6, probably in a few weeks. That will have a lot of fixes over Qt5, so maybe whatever is unhappy here gets fixed magically.

Let me know how you get on!
>>8896 >>8897 >>8900 >>8901 Hydrus is on stock python, which means it has 'threads', but thanks to the GIL only one of them can run python code at any one time, so they cannot spread out across your cores (a toy example of this limit is below). However, when heavy math occurs, stuff like thumbnail generation, I drop down to an optimised C++ library and sometimes FFMPEG as an external exe, where there is no limit. A CPU with more cores would see faster imports if you had, say, ten different things trying to import video at the same time, but I think it would only be 5-10% faster during that heavy period. The main bottleneck in hydrus is my shitty blocking UI code, which is my job to fix. Best thing you can generally do for hydrus performance is keep your session slender (under ten million weight in the pages menu) and run the database off an SSD, which I assume you are already planning to do.

>>8898 Thanks, I'll try to figure this out. It looks like the dupe filter needs a couple of better hooks to deal with the archive-lock option, as in >>8856 .
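To see the 'stock python' limit for yourself, here is a toy script--nothing to do with hydrus's actual code--where two CPU-bound threads take roughly as long as doing the work twice in serial, because only one thread can hold the GIL at a time:

import threading
import time

def busy(n=10_000_000):
    # pure-python CPU work, so it holds the GIL the whole time
    x = 0
    for _ in range(n):
        x += 1

start = time.perf_counter()
busy()
busy()
print('serial:', time.perf_counter() - start)

start = time.perf_counter()
threads = [threading.Thread(target=busy) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print('two threads:', time.perf_counter() - start)  # roughly the same

Native-code work like image decoding or an external ffmpeg process does not hold the GIL, which is why those bits can use more cores.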
>>8906
>You should get some popups--is it trying to match what you type to an action?
I don't get any popups. Instead, when I hit Super+Space, it just passes the space input through as a space, seemingly ignoring the Super. I switched the shortcut to swap inputs to Ctrl+Space, and when I tried that shortcut, it did give a popup saying the shortcut was passing through and that it was "in a state to catch it", but nothing else happened. With the Ctrl+Space shortcut, it didn't pass a space character through to the text box. It just did nothing. I don't think this is related to the shortcut though, because even using the mouse to click the input button on the panel (I think it's called the taskbar on windows), it still doesn't let me switch inputs when hydrus is the focused window. It's like hydrus itself is blocking the input method when it has focus.

>If you are also running from source (e.g. I believe the Arch package does), then the problem wouldn't be with the build, though.
I'm not running from source. I'm using the prebuilt Linux release. I'm not actually a programmer, so I don't really know how to run things from source and compile software and stuff like that.

>Last ditch possibility: I am expecting to roll out test builds on Qt6, probably in a few weeks. That will have a lot of fixes over Qt5, so maybe whatever is unhappy here gets fixed magically.
I'll try that out when it comes if nothing else works.
I had a good week. I did a mix of background cleanup along with quality of life and other improvements, mostly for advanced users. The release should be as normal tomorrow.

>>8908 Damn. Well, at least we know it isn't something stupid like the shortcut system silently eating it. You saying
>It's like hydrus itself is blocking the input method when it has focus.
makes me think this might be related to some weird floating window stuff I do with the autocomplete dropdown and the popup toaster, although that doesn't fully explain why text inputs in the options dialog wouldn't work either. I have a plan to rewrite all this in future, so if this is my fault, maybe I'll accidentally fix this.

>I'm not actually a programmer
Sorry, yeah, I forgot to say, running from source is a pain unless you are familiar with python. Only an option if we have no other ideas and we get good info that it might actually help.

Please let me know how you continue to go on here.
Why do I keep getting new results for the search "system:num file relationships: has more than 0 duplicates" whenever my subscriptions download new files, even after I've already sorted out the duplicates using the dupe finder page? In other words, dupes regularly appear that the dupe filter ignores, but they are findable with that dedicated predicate. Every time this happens I have to select all the found dupes, manually dissolve their dupe groups, and only then will the dupe filter catch them. Kinda annoying to do that all the time.
https://www.youtube.com/watch?v=N9LFp_brHvE

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v492/Hydrus.Network.492.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v492/Hydrus.Network.492.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v492/Hydrus.Network.492.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v492/Hydrus.Network.492.-.Linux.-.Executable.tar.gz

I had a good week mostly cleaning code and adding some things for advanced users.

highlights

If you are in advanced mode, the file sort and collect controls now have a 'tags' button. This lets you determine which tag service that particular sort/collect applies to. If you are this tag-clever, let me know how it works for you. This tag button is the same thing that autocomplete dropdowns use, and I expect it to soon get a 'multiple location' makeover like the file button did with multiple file services.

The archived file delete lock (under options->files and trash) gets a pass this week. If you try to delete files that are currently locked, it now makes a popup with a button to see those specific files, so you can decide what to do. The duplicate filter also handles the different situations (like 'archive both files' + 'delete the worse file') better.

The duplicate filter also now shows if one or both files have an ICC profile.

The shortcut actions to 'move page selection home/left/right/end' now try to stay on the same level if you hit them several times. If you use these actions, try them out through a mix of pages of pages and you'll see how it works now. It remembers the current level within three seconds of the last move event. This was requested by several users, and there isn't a nice way to do it, so I hacked an answer--let me know what you think.

full list

- sort and collect updates:
- for big brain users, the collect control now has a tag domain button. it only shows if you are in advanced mode (issue #572)
- the sort control also has a tag domain button hidden behind advanced mode. it applies to system:num tags and namespace sorting
- the collect control now appears on all import pages
- .
- archived file delete lock:
- the duplicate processing action code now no longer archives files that are due for deletion right before that deletion. this was hitting the archive delete lock
- if the archive delete lock is on and the 'other' file in the duplicate filter is archived, the option to 'this is better, delete the other' is now disabled
- if you attempt to delete a delete-locked file during normal browsing, or if an automatic system like export folders wants to but fails on some, a popup is now made with a button to show the files that were filtered out, so you can review the situation and fix it if you want
- I am considering adding a dialog to say 'hey, this is locked, want to send back to inbox?' to fix these situations in a nice way, but I think this is probably a bad idea in terms of workflow, design, and my sanity given all the edge cases and potential future expansions of lock rules. maybe I'll add a simple 'delete and override lock checks' option, but a lock is a lock tbh. for now, I will focus on this better UI feedback of currently delete-locked files and make it simpler for humans to remove any locks
- .
- misc:
- using black magic, I have made it so the shortcuts for 'move left/right one page' and 'move home/end' do not dip down to the lowest level of a neighbouring page of pages for the next command. it now stays on the current tab level for three seconds after the most recent move command. this works in testing but may be jank in some IRL situations, so if this matters to you, let me know how it works out
- fixed a bug in 'do a full metadata resync' that meant unprocessed row orphans were not being deleted, which led to lingering 1950/2000-style processed gauges that didn't actually cause any work to be done on 'process now'
- the duplicate filter now shows if one or both files have an icc profile. for now the score for this is always 0, neutral
- I think I have reduced general lag on some busy clients
- .
- code cleaning and minor fixes:
- refactored file viewing stats management to a new database module
- refactored file physical storage management to a new database module
- cleaned up an ugly bridge that made inbox/archive work and moved it all to a clean new separate database module
- improved some client file physical storage repair code, both in how it repairs and how it recovers in the current boot
- updated the yes/no dialog texts when you apply 'not related' or 'alternates' to a selection
- added a bunch of tooltips to the 'speed and memory' options panel. also clarified the example image sizes in number of pixels
- improved how my grid layout propagates tooltips from the widget to the text when the widget is compound and in its own layout
- consolidated where the delete lock test occurs to just one location for db, gui
- added infrastructure to filter and report delete-locked files. callers no longer care about specific lock rules, opening this up to future expansion
- cleaned and simplified some duplicate action processing code
- cleaned up some file collect code, optimised it a bit too
- the sort control now only changes sort type on mouse wheel events if the mouse is over that button
- renamed 'tag search context' to 'tag context' across the program, mirroring a recent change with the location context, and gave it some bells and whistles. in future, the tag context will hold multiple tag services
- wrote a new button to edit tag contexts

next week

Next week is small jobs. I have a bunch of different things piled up that I want to get to, and I'll see if I can catch up with some longer term bug reports too.
Just installed this for the first time and added the public tag repo. It's taking a really long time to process the tags. Is that normal? Any way to speed it up?
>>8863
>if it ever comes to a "these files are the same" I always, without any hesitation, go for the smaller file because there is no reason not to at that point.
i can think of one reason: tag importing from other sites. for example, imagine you download a file directly from pixiv. then later you download a pixel-for-pixel duplicate of the file from danbooru, which has a slightly smaller filesize but hasn't been tagged very well. in the duplicate filter, you delete the slightly larger pixiv one. then later, the pixiv file is uploaded to gelbooru, and is tagged properly. you try to import the gelbooru file, but since you've previously deleted it, it gets ignored. none of the tags from gelbooru are added to the file, since it's deleted. and the pixel-for-pixel duplicate from danbooru is still barely tagged. so in some cases it's better to keep a larger file if it has a better source that is more likely to propagate to other websites.

alternatively, maybe there could be an option for downloading tags even if the file has been previously deleted, and an option for "sharing" tags between files that have been detected to be pixel-for-pixel duplicates? probably unnecessary.
Is there a way to make an import folder automatically convert video files to another format before importing? Like mkv to mp4?
>>8846 Thanks for your response. I don't have any font settings in options->style. I tried changing my system fonts but it doesn't look like Hydrus uses any of those. And as you expected, kanji and stuff shows fine (Korean too). Just smileys don't. I tried to change the font in a .qss file but that didn't do anything, probably cuz I have no idea what I'm doing there.
>>8745
>If they are the same quality, then it doesn't ultimately matter much which is promoted to king, since they are the same.
But isn't this vulnerable to the "intransitivity of indifference" issue? Shouldn't hydrus keep track of same-quality relationships as long as the files are tied for king, so that a noticeably worse file doesn't end up accidentally becoming king due to bad filter pairings? Like, it might not find pairs that are actually better/worse pairs because the better file got treated by hydrus as a worse dupe in a "same-quality" decision earlier in the filter, and this can lead to the wrong file becoming king.

I originally wrote a much longer reply trying to explain what I mean because I feel like what I said could be confusing, but that would've probably just been more confusing. I don't know what the guy you responded to wanted, but the adjustment I'm talking about is only concerned with preserving "same quality" relationships between files that are tied for king, since that's the only time it matters. So not exactly like the old duplicate system that you got rid of. I get why that one's gone.
When I watch a thread to import it into a file service Y, it doesn't import files that were deleted from file service X.
Is there an option to just throw all files down a page of pages into a new page?
Is there a way to "forget" all deleted files? So that they don't interfere with "exclude previously deleted files" anymore
>>8564
>If you really really need this record removed
Nah, while I would rather have them removed, it's not a huge issue. Besides, there's a ton of files I'd like the hashes removed for, and I'm not even sure what they are anymore because I no longer have the original files.
>Unfortunately I just don't have a scanning routine in place yet to categorise every possible reference to every hash_id in your database to automatically determine when it is ok to remove a hash, and then to extend that to enable a complete 'ok now delete every possible connection so we can wipe the hash' command.
I know it's most likely a long ways away, but do you have a rough timeline of when you would like to get started on this?
>>8911 I am sorry, I am not sure I totally understand your problem. Are you finding that the '>0 dupes' system predicate is actually getting new results after your subscriptions finish running, or is it only when you do some duplicate filtering that you get more results? If it is the latter, then this is what I would expect. If you say 'these files are dupes, this is better quality' and so on, then those are duplicate relationships. If you mean the 'potentials', those that the duplicate finder page searches through in order to populate the filter, that is searched with the same 'system:file relationships' predicate, but using 'potential duplicates'. A downloader can definitely add new potentials, but then your work in the filter will resolve those potentials into proper duplicates. Maybe you can walk me through a typical session with a concrete example and you can help me figure out what you are trying to do when you dissolve the groups and so on. Normally I would say that a group dissolve is a rare job, something you would do in an awkward maintenance situation where you were trying to undo a problem.

>>8916 Yep, it has about ten years' worth of uploads on it, and it has to do some heavy CPU work on all that stuff. It is best left to work in the background, so if you can possibly help it, just leave it alone--it will do little bits of work here and there if you leave the client on, or bursts when you shut it down otherwise, and you'll soon see some tags appear. I'd estimate it takes a couple of months of background work to fully catch up, and thereafter five or ten minutes a day. If you are worried it is processing too slowly, a typical decent speed on an SSD is 3,000-20,000 rows/s, and on an HDD you get 100-1,000. You should not try to sync to the PTR on an HDD, it is too big these days. Let me know if you are getting too slow on an SSD. You can double-check these numbers over the long term by searching your log file at install_dir/db/client - date.log.

>>8918 Not in hydrus, so you'll want to set up your own external script that converts from a staging area to another folder and then moves the finished product into your hydrus import folder. Bear in mind that you don't want to convert too many videos, and certainly not images, if you sync with the PTR, as conversion changes the files' hashes, which means they won't get tags on the PTR.
>>8920 Shame. At the moment, yeah, there are no font options in the actual style page, and everything has to go through ugly QSS. I hope to have some better options here in future. I'm hoping to roll out a Qt6 version of hydrus in the coming weeks. Please give it a go when convenient and let me know if it helps anything here.

>>8921 I hadn't heard of 'intransitivity of indifference' before; that sounds like an interesting problem that might come up here from time to time. My general priority for duplicates is currently improving processing time. I don't mean to repeat myself, but I'll say my original system was very complicated, remembered every decision perfectly, and presented every file within a group against all possible combinations to try to get the genuine best version. Unfortunately, it proved way too complicated to maintain and just caused frustration from all the very small choices I was throwing at users. All this was avalanched by the similar files detector, which works pretty well and gives most users tens or hundreds of thousands of potential duplicates, even at 'exact match' distance. The problem is that we have too many pairs to go through and not enough time to get through them. So, while it may be true that a set of very similar dupes nonetheless has a clear best/worst when you compare the best and worst, I am not sure I can justify making the system complicated enough to handle that. I presume I'd have to keep track of all the current pair- or triplet-kings and then, any time that group was compared to another file, compare that file against all the brother-kings and offer UI for the user to select which was the true king, now they were all arrayed. It just seems like too much when what I really should be focusing on is an automatic system to detect shitty jpeg quality and png versions of jpegs and throw those in front of the user, since that's most of our problem space. But I'll keep this in mind. Let me know if you have any more thoughts on this topic.

>>8937 Thanks. Yeah, I guess this is true. The 'exclude previously deleted files' test applies to any file in the trash or removed from trash, so it'll do this. How can I make this work better for you without turning off the feature entirely? Maybe in 'file import options', where it says 'exclude previously deleted files', I can add another checkbox like 'only if they were deleted from the destination file service'? Something like that?
>>8965
>I'm hoping to roll out a Qt6 version of hydrus in the coming weeks.
Not the one you replied to, but will there be both Qt 5 and 6 versions out for a while, or is it going straight to Qt 6 only? If there are two versions, how will this affect the AUR?
>>8953 Not sure. There's a few admin/meta commands if you right-click on a media page or page of pages tab, like 'send pages to the right down to a new page of pages', but do you want something specifically for files, like 'vacuum up all the files inside here and put them in one new page'? What would be convenient shapes for this action, for you? Maybe if I made it work on a row, so it sucks up all the files from all the pages in a row and adds them to one page at the end of the row? Or would you rather it worked like a tree, on all the levels down inside a page of pages, and put them on a new page beside that page of pages tab? Or something else?

>>8958 Yes, under review services->all local files (you might need help->advanced mode on to see this): 'clear deleted files record'. Make sure you make a backup before you try this! It may not work how you imagined, so make a backup, and if it all shits up your workflow on next boot, or if it simply breaks, roll back to the backup.

>>8963 I'm chipping away at the database module refactoring. I did a bit more in last week's cleanup. Basically my new modules have a function where they say 'hey, I am responsible for three tables, and those tables have file definitions here and here', and then I will build a maintenance routine that will check items in the master file table against every table in the database to see if they are now orphans, and also some maintenance UI so humans can see where they still have definitions lingering for particular files (like the deleted files table). The modules also do the modern repair work when a database boots with missing tables. So, ClientDB.py is down to 610KB, after starting at 980KB or something last year(?). I figure I probably have twelve or fifteen modules still to spin out of it, and then, when I have 100% coverage, I can start working with it and add new structure again. The whole directory is now 1.29MB just of database code...
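To illustrate that module idea in the abstract--all names here are invented for the example, not the real ClientDB structure--each module declares which tables it owns and where hash_ids appear, and one generic sweep can then ask every module whether anything still references a given hash_id before the master record is recycled:

class FileViewingStatsModule:
    def tables_and_definition_columns(self):
        return [('file_viewing_stats', 'hash_id')]

class DeletedFilesModule:
    def tables_and_definition_columns(self):
        return [('deleted_files', 'hash_id')]

def hash_id_is_orphan(hash_id, modules, row_exists):
    # row_exists(table, column, value) would be one indexed SELECT per table
    return not any(
        row_exists(table, column, hash_id)
        for module in modules
        for table, column in module.tables_and_definition_columns()
    )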
>>8966 Yes, not sure how long, but I want to ease into Qt6, so we'll figure out simultaneous builds for a while and ask advanced users to try out 6 so we can hammer out problems. Maybe two months(?), I am not sure. Qt6 will break Windows 7 support, maybe some macOS stuff too, and doubtless it will be a festival of violence on the weirder Linux Window Managers. If the test all fucks up everywhere, I'm totally willing to put it off another year. But it does have a ton of bug fixes, so I do want to go up if possible. For some reason, the Qt guys locked all the backported Qt5 patches behind an enterprise paywall, so everyone on 5 is like two years behind. Also, fingers-crossed, there won't need to be many/any actual code changes, so if the AUR guys want to stay on 5, it should be possible just by staying on their current build script, and if I need to change three lines to allow this, that's no problem at all. Anyone who wants to stay on 5 can run from source for as long as 5 holds out.
hello hydrus anons! how do i go about getting started with stuff like tagging when I have 50,000 images imported?
>>8968
>but do you want something specifically for files, like 'vacuum up all the files inside here and put them in one new page'?
I meant something like "open image in a new page" but recursive for pages of pages.
I had to do a database recovery from a failing disk. I followed the instructions from the various text files to recover/regenerate my databases, and I managed to recover most of the data. I'm still seeing some problems I'm unsure how to debug, though. I'm getting a DBException on start that I can ignore for the most part, but the same exception shows up sometimes when I do a search, which stops the client from completing it. If I restrict the search it'll sometimes go away, but if I make it more generic it'll also sometimes go away (e.g. A and B will cause the exception, but neither A alone nor A, B, and C will cause it). I'm not sure if it's related, but the numbers in the PTR service are off, too. It'll say that the "client is caught up to service and can upload content," but it shows the definitions, mappings, tag parents, and tag siblings as being only ~98% complete (4820/4888 on definitions at the time of writing), and when I click "process now" nothing happens. I'm still getting stuff from the PTR, as when the day rolls over it'll download the new update and process it as normal.

v492, linux, frozen
DBException
DataMissing: Did not find all entries for those hash ids!
Traceback (most recent call last):
  File "hydrus/core/HydrusThreading.py", line 401, in run
    callable( *args, **kwargs )
  File "hydrus/client/gui/pages/ClientGUIManagement.py", line 5344, in THREADDoQuery
    more_media_results = controller.Read( 'media_results_from_ids', sub_query_hash_ids )
  File "hydrus/core/HydrusController.py", line 684, in Read
    return self._Read( action, *args, **kwargs )
  File "hydrus/core/HydrusController.py", line 200, in _Read
    result = self.db.Read( action, *args, **kwargs )
  File "hydrus/core/HydrusDB.py", line 927, in Read
    return job.GetResult()
  File "hydrus/core/HydrusData.py", line 2057, in GetResult
    raise e
hydrus.core.HydrusExceptions.DBException: DataMissing: Did not find all entries for those hash ids!
Database Traceback (most recent call last):
  File "hydrus/core/HydrusDB.py", line 610, in _ProcessJob
    result = self._Read( action, *args, **kwargs )
  File "hydrus/client/db/ClientDB.py", line 7873, in _Read
    elif action == 'media_results_from_ids': result = self._GetMediaResults( *args, **kwargs )
  File "hydrus/client/db/ClientDB.py", line 4994, in _GetMediaResults
    missing_hash_ids_to_hashes = self.modules_hashes_local_cache.GetHashIdsToHashes( hash_ids = missing_hash_ids )
  File "hydrus/client/db/ClientDBDefinitionsCache.py", line 178, in GetHashIdsToHashes
    self._PopulateHashIdsToHashesCache( hash_ids )
  File "hydrus/client/db/ClientDBDefinitionsCache.py", line 80, in _PopulateHashIdsToHashesCache
    hash_ids_to_hashes = self.modules_hashes.GetHashIdsToHashes( hash_ids = uncached_hash_ids )
  File "hydrus/client/db/ClientDBMaster.py", line 274, in GetHashIdsToHashes
    self._PopulateHashIdsToHashesCache( hash_ids, exception_on_error = True )
  File "hydrus/client/db/ClientDBMaster.py", line 94, in _PopulateHashIdsToHashesCache
    raise HydrusExceptions.DataMissing( 'Did not find all entries for those hash ids!' )
hydrus.core.HydrusExceptions.DataMissing: Did not find all entries for those hash ids!
I have a parser that generates multiple urls with different priorities. The second url always exists but the first one is better for the files that are in the server. Is there an option for downloaders to check for alternate urls if the first one fails for some reason (e.g. 404)?
>>8911 >>8964 I'm just asking why there is a situation where the dupe filter sees no more dupes, but they can still be found by that system search expression. What can I do to avoid that? My setup is rather complicated, as I'm using Hydrus for managing albums and posts on a facebook-like website; I'm not sure I'll be able to explain all the specifics here. But basically, those special dupes happen when the site changes the URL schemes for images: they were downloaded under an old-style URL, and then Hydrus encounters them again under a different URL, and the MD5s also don't match, because of new compression methods that the site owners constantly introduce. Otherwise the images look exactly the same. I guess the different urls confuse Hydrus, but I still can't understand why the dedicated dupe filter won't see the obviously duplicated images until I find them manually with the >0 search expression in a regular "files" tab and dissolve their dupe groups.
Getting this error when manually setting multiple images as alternates.

v492, win32, frozen
UnboundLocalError
local variable 'message' referenced before assignment
  File "hydrus\client\gui\ClientGUIShortcuts.py", line 1257, in eventFilter
    shortcut_processed = self._ProcessShortcut( shortcut )
  File "hydrus\client\gui\ClientGUIShortcuts.py", line 1197, in _ProcessShortcut
    command_processed = self._parent.ProcessApplicationCommand( command )
  File "hydrus\client\gui\pages\ClientGUIResults.py", line 2368, in ProcessApplicationCommand
    self._SetDuplicates( HC.DUPLICATE_ALTERNATE )
  File "hydrus\client\gui\pages\ClientGUIResults.py", line 1853, in _SetDuplicates
    result = ClientGUIDialogsQuick.GetYesNo( self, message, yes_label = yes_label, no_label = no_label )
Is there any way to change the default position of the manage tags window? It's pretty annoying to have to move it every time I tag something, because it blocks the preview window.
could you add a new sort option for the tag list that sorts by relative prominence, as an alternative to the absolute "count" sort that we have now? by this I mean sorting tags based on how commonly they appear on the current page vs how commonly they appear overall in your database. this would help make it so that universally common tags like "female", for example, wouldn't always be at the top of the list; instead you'd get a list of the tags that are more uniquely common to the page you're looking at compared to in general. so a tag that appears in 1% of all your files, but 2% of the files on this page, would be sorted highly, because it appears on twice the percentage of files here than it does overall. I hope something like that wouldn't be complicated to implement, because it would be really useful to me, even though I know it sounds like a minor thing.
>>8968
>I figure I probably have twelve or fifteen modules still to spin out of it, and then, when I have 100% coverage, I can start working with it and add new structure again.
Nice! So this feature isn't too far off?
How to make Hydrus download every link in a text file? ex:

$ cat test.txt
https://danbooru.donmai.us/posts/5537292
https://danbooru.donmai.us/posts/5537294

test.txt contains almost 100k links, so pasting them into the download box is not an option.
does deleting the file log for subscriptions make hydrus try to start all over again, or is that log separate from hydrus's "memory" of what sub files it already downloaded? Is it safe to delete the log every once in a while to clear it out?
Hi devanon, with the latest update (this problem might have existed for earlier versions, but I had not noticed then), I get an error when attempting to set images as alternates in the preview page, either through the right click menu or a keyboard shortcut. The error message is as follows:

v492, win32, frozen
UnboundLocalError
local variable 'message' referenced before assignment
  File "hydrus\client\gui\ClientGUIMenus.py", line 213, in event_callable
    callable( *args, **kwargs )
  File "hydrus\client\gui\pages\ClientGUIResults.py", line 2368, in ProcessApplicationCommand
    self._SetDuplicates( HC.DUPLICATE_ALTERNATE )
  File "hydrus\client\gui\pages\ClientGUIResults.py", line 1853, in _SetDuplicates
    result = ClientGUIDialogsQuick.GetYesNo( self, message, yes_label = yes_label, no_label = no_label )
>>8992 Have you considered using curl and xargs with the `/add_urls/add_url` POST API endpoint? I'd do something like: cat test.txt | xargs -I{} curl -XPOST -H 'Hydrus-Client-API-Access-Key: YOURAPIKEYGOESHERE' --json '{"url": "{}"}' http://127.0.0.1:45869/add_urls/add_url The above is untested, so please test on a small subset of your file before trying this on 100k links. Also you need curl 7.82.0 or up to use the --json option; you could also do it with --data and a few other options if needed. I would also add the optional `destination_page_name` to the json object to get a new page, but it'd work without.
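If curl's --json flag is too new for your setup, here is an equally untested Python take on the same loop, hitting the same /add_urls/add_url endpoint; swap in your own access key, and the 'destination_page_name' line is optional:

import requests

API = 'http://127.0.0.1:45869'
HEADERS = {'Hydrus-Client-API-Access-Key': 'YOURAPIKEYGOESHERE'}

with open('test.txt') as f:
    for url in (line.strip() for line in f):
        if not url:
            continue
        r = requests.post(
            API + '/add_urls/add_url',
            headers=HEADERS,
            json={'url': url, 'destination_page_name': 'bulk import'},
        )
        r.raise_for_status()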
trying to use the "set selected files as alternates" action (both through the shortcut and through the context menu) gives this error. this must be a new bug because this didn't happen on the last version. v492, linux, frozen UnboundLocalError local variable 'message' referenced before assignment File "hydrus/client/gui/ClientGUIShortcuts.py", line 1257, in eventFilter shortcut_processed = self._ProcessShortcut( shortcut ) File "hydrus/client/gui/ClientGUIShortcuts.py", line 1197, in _ProcessShortcut command_processed = self._parent.ProcessApplicationCommand( command ) File "hydrus/client/gui/pages/ClientGUIResults.py", line 2368, in ProcessApplicationCommand self._SetDuplicates( HC.DUPLICATE_ALTERNATE ) File "hydrus/client/gui/pages/ClientGUIResults.py", line 1853, in _SetDuplicates result = ClientGUIDialogsQuick.GetYesNo( self, message, yes_label = yes_label, no_label = no_label )
>>9006 Thank you very much! I had no idea Hydrus had an API.
I had a good week. I fixed some bugs, cleared out some jank, and wrote a prototype EXIF data viewer. The release should be as normal tomorrow.

>>8986 >>9001 >>9007 Thanks, sorry for the trouble, this was a stupid typo and is fixed tomorrow!
>>8990 I second this, and have an additional idea. Sometimes there are too few results to collect enough statistical data for sorting, and it becomes impossible to order tags by their incidence properly (for example, you might have 10 tags, each on 3 images--no way to order those among themselves by incidence). But if you use their popularity in the whole collection in addition to the current selection, then such a group can still be sorted in a meaningful way. Something like the sketch below.
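A back-of-envelope version of the scoring we mean (illustrative only, not hydrus code): divide a tag's frequency on the page by its frequency in the whole db, with a small smoothing constant so groups of tiny counts still sort meaningfully.

def prominence(page_count, page_size, db_count, db_size, smoothing=1.0):
    page_freq = page_count / page_size
    db_freq = (db_count + smoothing) / (db_size + smoothing)
    return page_freq / db_freq

# on 2% of the page but only 1% of the db -> ~2.0, sorts high
print(prominence(2, 100, 10_000, 1_000_000))
# 'female' on 40% of the page and 40% of the db -> ~1.0, sinks
print(prominence(40, 100, 400_000, 1_000_000))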
https://www.youtube.com/watch?v=5VygwTBgph4

windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v493/Hydrus.Network.493.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v493/Hydrus.Network.493.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v493/Hydrus.Network.493.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v493/Hydrus.Network.493.-.Linux.-.Executable.tar.gz

I had a great week working on some fixes and prototype EXIF support.

EXIF

EXIF are metadata labels embedded in some media files, usually JPEG. They might say the make of camera that took a picture, or its aperture/ISO, the GPS coordinates it was taken at, how an image is rotated, or the DPI of a logo for printing purposes. There are many different possible fields.

I have wanted to add support to view and even search this data for a long time, and this week we start with something simple but not user-friendly. The media viewer now has a 'cog' icon on the top hover. On JPEGs, it lets you check for EXIF data. Most files don't have it, but if one does (usually photos or professional art exported from Photoshop etc...), it now throws up a little window showing every field. (If you want to poke at the same data outside the client, there is a rough PIL sketch at the end of this post.)

The duplicate filter now actively scans for EXIF data and says if one or both files have it, just like the recent addition for ICC Profiles. Many websites strip EXIF data on upload, so if you have two exact dupes, the one with EXIF data is probably closer to the 'original' version.

Now I have this framework, I would like to extend it. Beyond general polish like replacing the cog icon with something nicer and only enabling it when I know there is some EXIF to show, I want to cache 'has exif data yes/no' in the database and allow you to search by that. I expect I'll also add the actual EXIF data itself to the database one day, so you'll be able to search all your pictures for iPhone 6 photos or whatever.

So, if you are interested in EXIF, please give this a go and let me know what you think. This feature was taking so long to happen that I decided to just spam out a rough v1.0 that I can keep improving.

full list

- EXIF:
- in the first step of 'official' EXIF support, the media viewer now has a 'cog' button on the top hover, enabled when looking at a jpeg, that will check the file for EXIF data. if found, it will throw it up on a simple new window that shows EXIF id, label, and value. this is a hacked-together prototype, not super user-friendly, but it works. let me know what you think, and please send me any files that have weird EXIF that doesn't parse right but you think should. I already discovered a file with a null character that wouldn't display in UI, that sort of thing
- GPS EXIF values are also parsed and extracted
- made it so you can double-click a row in this new window to copy an EXIF value to clipboard
- in the duplicate filter, if one or both files have exif data, this is now noted in the comparison statements, just like ICC profile! (issue #469)
- obvious future extensions here will be storing 'has exif' in the database and allowing its presence to be searchable, and enabling the cog button (or a nicer 'exif' button) only when there is known data to see. a subsequent step would be actually caching the data in the database for full EXIF search
- as a side thing, we're now set up on the hydrus end to pull TIFF EXIF, but PIL doesn't seem to offer it, so we'll have to wait for a different solution there
- .
- fixes and misc:
- fixed a problem that made saved page file sorts reset their sort order one time on update to v492. thank you to a user for noticing this and discovering the fix, and I'm very sorry for the inconvenience of changing your session and favourite search sorts. unfortunately there is no easy fix other than rolling back to a backup and jumping forward to this version
- fixed a v492 message display error when setting various duplicate relationships to three or more thumbnails at once. it was a stupid typo, sorry for the trouble! (issue #1199)
- if a page tab name elides to a 'shorter...' length, it now has its full name as the tooltip
- fixed a typo in update code error handling (issue #1192)
- the duplicate filter page now remembers if you are 'searching immediately'/'search paused' (issue #1193)
- if you are on non-Windows and export files manually or with an export folder to an NTFS or exFAT partition, this is now detected, and NTFS-invalid characters in the pattern-generated folders or filename are now replaced with underscores (issue #1194)
- 'fixed' a system predicate bug in the 'OR*' advanced predicate parser--entering a logical expression that results in a negated system tag now causes an error. previously, it would strip the 'system:' and just enter the given text as an unnamespaced tag. furthermore, that dialog now reports specific error reasons when it fails to parse. I hope to improve support for negated system tags in future--some stuff, like archive/inbox, should be easy
- I think I fixed an instance where the archive/delete filter's confirmation dialog could present 'delete from hard disk' as an option when it wasn't appropriate
- in an attempt to reduce the media-change flickering we've recently seen in the media viewer, I untangled a bunch of the canvas size/position code this week. I'm preparing a complete overhaul and neat Qt layout integration, which this starts. I _think_ I've made some things less flickery on occasion, but we'll see IRL. much more to do
- added a '--profile_mode' launch argument, which allows you to capture the performance of boot and also try out profile mode on the server (although support there is very limited atm)

next week

Next week is a medium size job week. I want to put some time into note parsing. I am not sure how far I can get, but fingers crossed I can actually get 'note import options' and a note content parser working, and we'll be able to update the existing downloaders to grab artist notes and things.
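As promised above, a rough PIL sketch for poking at the same data yourself--this is not hydrus's actual viewer code, just the simplest way I know to dump the id/label/value rows for a jpeg:

from PIL import Image
from PIL.ExifTags import TAGS

with Image.open('photo.jpg') as im:
    for tag_id, value in im.getexif().items():
        # TAGS maps the numeric EXIF ids to human-readable labels
        print(tag_id, TAGS.get(tag_id, '(unknown)'), value)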
I know this is probably something brought up a lot, but I think hydrus would benefit greatly from having actual collection items. The argument against collections is that hydrus works on individual items, and that any valid collection can just be made by filtering for the things the files have in common, like a post id or a manually set collection tag. I don't think this is really equivalent to real collections, though. There are times when you want to consider a collection as an item in and of itself, with its own tags and a single link to all its contents. Not only that, but doing things the current way is awkward because it forces you to adapt your sort rules whenever you want to care about collections. More importantly, collections can be implemented without breaking those design ideas.

In this system, collections are individual items. They are added as independent objects and can then have files assigned to them. A collection claiming files as its children does not consume or group the files, just references them as being children of the collection. The collection item itself would be treated as a file-like item with its own tags & info, only instead of being a real file to view, it acts as a link to all of its contents. For thumbnail purposes it could maybe let you set a certain child to act as the thumbnail. (There is a concrete sketch of this shape at the end of this post.) Actually "using" the collection could work a number of ways, but one way would be for it to open a page containing all of its children, with a saved search / sort rule applied to the page. Or it could open a viewer for all of its files with a custom sort.

The collection being an item, rather than just something like a 'set:xxxx' tag, makes it so you can filter for & see them whenever they match a search they should match. Any information relevant to the collection as a whole can apply to the collection item alone; you no longer need to pollute members of a set with tags about the whole thing that may not apply to individual items. As for the children, they would still be treated as just normal files, and wouldn't be modified besides maybe being able to see what collections are claiming a file. There is no loss of information in adding them to a collection; they will still match any searches they should as individual items.

I don't know about the technical implementation, but this system seems to me like it would be able to add a proper collection feature while not disturbing hydrus' design, and it could be made easy to use with a right click > 'collect files' action. You could also have settings to hide collections or to hide collection children, for situations where you wanted to only see one and not the other.
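To make the shape concrete, here is a purely illustrative sketch--not anything hydrus actually has--where the collection is its own taggable item that references its children in a fixed order without consuming them:

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Collection:
    collection_id: int
    tags: set = field(default_factory=set)
    child_hashes: list = field(default_factory=list)  # ordered references, not copies
    thumbnail_hash: Optional[str] = None  # one child elected as the thumbnail

album = Collection(1, tags={'creator:someone', 'set:beach trip'})
album.child_hashes += ['hash_a', 'hash_b']  # the files themselves stay normal files
album.thumbnail_hash = 'hash_a'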
When trying to download files from 8chan.moe, I get this error message sometimes:

("read error: Error([('SSL routines', '', 'unexpected eof while reading')])",)… (Copy note to see full error)

Traceback (most recent call last):
  File "urllib3\contrib\pyopenssl.py", line 313, in recv_into
  File "OpenSSL\SSL.py", line 1897, in recv_into
  File "OpenSSL\SSL.py", line 1700, in _raise_ssl_error
  File "OpenSSL\_util.py", line 55, in exception_from_error_queue
OpenSSL.SSL.Error: [('SSL routines', '', 'unexpected eof while reading')]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "hydrus\client\importing\ClientImportFileSeeds.py", line 1406, in WorkOnURL
    self.DownloadAndImportRawFile( file_url, file_import_options, network_job_factory, network_job_presentation_context_factory, status_hook, file_seed_cache = file_seed_cache )
  File "hydrus\client\importing\ClientImportFileSeeds.py", line 433, in DownloadAndImportRawFile
    network_job.WaitUntilDone()
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 1885, in WaitUntilDone
    raise self._error_exception
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 1505, in Start
    more_to_download = self._ReadResponse( response, stream_dest )
  File "hydrus\client\networking\ClientNetworkingJobs.py", line 512, in _ReadResponse
    for chunk in response.iter_content( chunk_size = 65536 ):
  File "requests\models.py", line 751, in generate
  File "urllib3\response.py", line 575, in stream
  File "urllib3\response.py", line 518, in read
  File "http\client.py", line 455, in read
  File "http\client.py", line 499, in readinto
  File "socket.py", line 669, in readinto
  File "urllib3\contrib\pyopenssl.py", line 332, in recv_into
ssl.SSLError: ("read error: Error([('SSL routines', '', 'unexpected eof while reading')])",)
Is there a hydrus client that allows tag editing? So far I've only seen readonly booru implementations, but I need a way to let other people help me with tagging my large collection.
>>8977 If you are willing to invest the resources (about 50GB of SSD space, some CPU), your best bet is to sync with the PTR. Once you become a power user with getting on for 100k files, your best shot at getting tags is the 1.6 billion tags from that. More info here: https://hydrusnetwork.github.io/hydrus/PTR.html

It is an investment, though. If you don't want to get the PTR, you should get started with the downloaders. Redownloading files you already have isn't as wasteful as you think, as the client can often skip the actual file download, or at least only has to do it one time, and then it can grab all the tags and link you up with known URLs and stuff too. Have a play around with one of your favourite creators on a booru you like, and see if it gets lots of 'already in db' results and adds some tags.

Note that if your 50k files were converted at any time, like with a batch optimising/resizing program, then hydrus will have a lot more trouble getting tags for them. It relies on files' content not changing even a single byte to line up tags from shared sources. In this case, I still think it is worth getting started with downloaders and grabbing the 'canonical' versions of files, since you'll be able to de-duplicate your dupes in future. If your files are family photos or other personal stuff that isn't tagged online, you're mostly out of luck and will have to tag them yourself. Figure out a workflow and get grinding, good luck!

>>8981 Thanks. I think this is doable.

>>8982 Thanks. I think you are mostly good here, even though it sucks for now. Please turn on help->advanced mode, then open review services and go to the PTR tab. Hit reset processing->fill in definition gaps and tell it to reprocess. That should fix your missing hash id problems. You might like to do the 'content gaps' too, once definitions are done. Both jobs should be fairly fast, but they'll slow down when they hit a gap to fill.

That 4820/4888 problem I think I fixed recently. Hit reset downloading->do a full metadata resync on the same panel, then 'refresh account' to push it along. Should fix you up. Actually, do that before the 'reset processing' stuff.
>>8984 Unfortunately there is not. We have run into this before though, I think with DA, and I want to have this tech in when I next do a big iteration on the downloader system. There is no good solution for now.

>>8985 I'm afraid I think you have confused the 'dupe' term for two different things in hydrus:

'potential duplicate' = hydrus has scanned your files and found two that look similar
'duplicate' = you have said that two potential dupes are indeed dupes; this permanent relationship is saved to the database

The duplicate filter searches your 'potential duplicates'; 'system:file relationships >0 dupes' searches your actual 'duplicates'. The duplicate filter converts potential duplicates into normal duplicates. If you process your potential dupes in the filter until you have 0 left, you are going to be creating some dupes. This is normal. No worries if you can't explain the specifics on your end. Let me know if I can be any more clear about anything.

>>8989 Yeah, hit up options->gui. The 'frame locations' table is hellish ugly, but if you edit the values for 'manage_tags_dialog' (thumbs) and 'manage_tags_frame' (media viewer), you can make it inherit the parent's position or have a fixed position or whatever. It should be able to save its last position too.
>>8990 >>9020 Thanks. This is a really smart idea, and I like it. I am not ready to do it, as the lag and complexity of fetching this information live during sorting would make it not feasible, but in future I am expecting to update my basic tag object from the current 'string of text' to a full object that will have all sorts of metadata hung on it. At that point, I could inform all tags of their basic incidence at the point of loading and then sort live with that cached data. I'll keep this in mind, as there are probably some more optimal ways of figuring it out, and I may be able to hack in a way to do it earlier. Might be I could just store the top 500 tags in memory and delist them a bit to stop them crowding things all the time. I was working with a user a little while ago on improving the stats behind 'related' tags in the manage tags window, so I'm going to be thinking about this again soonish.

>>8991 Let's say 1-4 modules every month or so, as I do one cleanup week every four and most cleanup weeks spend at least some time on db cleanup, so I think the database cleanup will be done early next year. Then I'll be in a position to start for real on definition recycling tech. I hate estimates though. It sucks, but most of the time they are about 30-50% of the actual time needed. This cleanup thing is a slow background grind, too, something I do in 'off' time.

>>8992 >>9011 You can also copy the URLs to clipboard and import by clicking the arrow next to 'file log' on your downloader page and then going 'import new sources->from clipboard'. Or just open a 'urls' downloader page and click the paste button. The API is the way, though, if you have a lot. Your 100k in one page should be ok, but it might lag some things out a bit, so you might want to click the file log arrow again and 'delete x successful from the queue' when you are done with them to keep things lean.
>>9000 Yeah, clearing the file log would cause a sub to resync, I think. I'd recommend you stay away from editing it. Subs automatically clip their file logs to about 200 recent URLs, and they do so cleverly so it won't mess with their check times and so on, so you don't have to do anything to keep them clean.

>>9037 Thank you. That's basically a timed-out connection, I think. I'll see if I can improve the handling of this error. It can presumably be retried a couple of times, and it should be reported better too.

>>9043 Not yet. Advanced users could try to set up their own Tag Repository, but it would be a lot of work.
>>9028 I agree completely, especially the idea of the 'virtual' collection that has files in it but is its own taggable thing. There are several things holding this up, mostly shared with CBZ/CBR support. Stuff like:

'what is a multi-page file? how is it stored and searched in the database? how do we recognise it?'
'what does fixed multi-file file order look like? do we cache/track that in any way?'
'how do we show and browse a multi-page file in the media viewer? do we support bookmarks, or at least page position transitions between preview and media viewer?'
'how do we deal with the individual files inside a multi-page file? can the user tag them individually? can we show the collection as a navigable link when viewing an individual file?'

Once I have those and similar tech questions answered for CBZs, I think it won't be nearly so difficult to imagine tacking a virtual CBZ system on top. I regret how bad hydrus has been at this all these years, and I really want to have two-page manga spreads and the like tied together, or progressions where a character changes clothes over a series. 'File alternates', a planned extension of the file duplicate system, will also share some of this tech, particularly fixed file ordering for things like WIPs. So it'll wait for when I am working on CBZ, I think. It will be a ton of work, unfortunately, since it goes into core systems all over the place. It'll be some time, probably 2024 at the 'everything went right' earliest. 'Comic support' is in this priority list: https://github.com/hydrusnetwork/hydrus/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc
Hey guys, any mass downloader for hentai manga out there comparable to Hydrus? I have yet to find any that actually work. Thanks.
1. How do I create custom namespaces and give them their own distinct color like 'creator:' or 'series:'? I don't want all custom namespaces to have the same color.
2. I also want to rename the 'creator:' namespace to 'artist:', any way to do that?
3. How do I rename a tag?
Made a pictoa parser. No GUG because the URLs are a clusterfuck; it just parses the page with the simple URL downloader.
Is there a way to open a collection sequentially with an external image viewer?
>>9060 Try Hakuneko
>>9055 Regarding the dupes: when I finish resolving them in the duplicate filter, I always use 'this file is better, and remove the other', so there shouldn't be any dupes left after I finish working there. Moreover, the dupes found by the search expression never even appeared in the duplicate filter, and I'd remember seeing the same images again. They happen independently of the dupe filter. The only way they'll get into it is when I dissolve their groups.
>>9071 Thanks! I'll try that one.
>>>/v/658001 I was just on the edge of beginning to use hydrus, but while going through the beginning user guide I came across a couple of issues.
>Anything that needs a filename for purposes outside of filesearching
>Anything that needs to be ordered
Why must hydrus replace a filename with the file's hash? Is it that technically cumbersome for it to just check the hashes directly when it needs them? If it didn't, that would solve pretty much both of my main issues with the program. Filenames could still be used effectively for the purposes other than searching that many anons enjoy, and importing a folder of sequential images and then displaying them sequentially would be as simple as tagging the whole folder with a tag specific to the set and sorting alphanumerically by filename when searching the set tag. That instead of what I presume is the current workflow: tagging the whole folder containing the set with a set-specific tag, and then going through each image, displayed out of order since hydrus has no way to order them, and manually numbering them with the pages function.
>>8184
>Hydrus sucks at organizing files that are meant to be a sequential series.
I was afraid of this. It seems like there's some "pages" function everyone is talking about, yet this is still an issue.
>>9075 I've been talking with an anon in the other thread, and collections likely solve the issue of sequential images well enough for me. As for filenames, I can import files while automatically tagging them with their filenames under the namespace "filename", and I can export them to a folder with their filenames restored if I choose [filename] for the export under the Filenames option. However, neither he nor I can find an option to make all manner of exports default to using this pattern. What I'd ideally like is for exports to default to restoring the original filenames, so that when I export via drag and drop to upload files to any site that displays filenames, such as this one and most other imageboards, the filename is intact for other anons to read, whether it contains a joke, a pixiv ID, or other informative info.
One of my pages is showing trashed files that aren't actually in my trash, and I can't delete these trashed files. The thumbnails appear red with the hydrus logo.
>>9079 Never mind, I didn't notice it was a query page. I just had to click the query twice to refresh the page and make the deleted files disappear.
(29.52 KB 405x267 hydrus tags.JPG)

>>9075 I use this regex: (?<=\\)(.*?)(?=\\) It takes the folder structure and parses it as tags; that way I can filter by folder and sort by title tag. For sadpanda stuff I use LANraragi.
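For anyone wondering what that regex does: it grabs every path component that sits between two backslashes, i.e. each folder level but not the filename. A quick sketch with a made-up path:

import re

path = r"C:\art\creator name\series name\page_01.jpg"
# the lookbehind and lookahead match the backslashes without consuming them,
# so each folder name comes out as its own tag candidate
print(re.findall(r"(?<=\\)(.*?)(?=\\)", path))
# ['art', 'creator name', 'series name']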
>>9083
>sort by title tag.
Neat. But I still need a way to smoothly export via drag and drop while restoring filenames, for the purpose of quickly posting the images now that Hydrus lets me quickly find them. If it needs to save them first to some temp folder that deletes the files as soon as the next set of drag and drop exports is attempted, so be it. I want to skip the steps of opening the export menu, entering [filename] for the export, saving in a folder, then opening that folder, then posting the images, every time I want to post anything with a filename. If I can't do this, Hydrus is pretty much a no-go for me.
>>9084 I'd love that. In file>options>gui there are options for that behaviour with Discord drag and drop, but it never works for me.
>>9064 1. Under file>options>tag presentation 2. No idea, sorry 3. You could sibling it, right click tag>siblings>add siblings, or you could just select every instance of a tag and remove it while adding your replacement tag.
>>9075 >>9076 >>9084 Disregard this, I suck cocks. Hydrus great, Hydrus is merciful, All hail Hydrus! >>>/v/659663 >>9085 The dicksword drag and drop export options actually just apply to all drag and drop exports and should be renamed. If it's not working, try selecting the checkbox for the drag and drop bugfix.
I had a great week with two big changes. First, there are Qt6 test builds for advanced users to try out, and second, in prep for a soon-to-come Note Import Options, I added filetype filtering and 'use default at time of import' tech to File Import Options. The release should be as normal tomorrow.
>>9092 hey check >>9090
>>9090 Fuck, I had the filename pattern wrong. Thanks, anon.
https://www.youtube.com/watch?v=85bvcndvEpo

windows
Qt5 zip: https://github.com/hydrusnetwork/hydrus/releases/download/v494/Hydrus.Network.494.-.Windows.Qt5.-.Extract.only.zip
Qt6 zip: https://github.com/hydrusnetwork/hydrus/releases/download/v494/Hydrus.Network.494.-.Windows.Qt6.-.Extract.only.zip
Qt5 exe: https://github.com/hydrusnetwork/hydrus/releases/download/v494/Hydrus.Network.494.-.Windows.Qt5.-.Installer.exe
macOS
Qt5 app: https://github.com/hydrusnetwork/hydrus/releases/download/v494/Hydrus.Network.494.-.macOS.Qt5.-.App.dmg
Qt6 app: https://github.com/hydrusnetwork/hydrus/releases/download/v494/Hydrus.Network.494.-.macOS.Qt6.-.App.dmg
linux
Qt5 tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v494/Hydrus.Network.494.-.Linux.Qt5.-.Executable.tar.gz
Qt6 tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v494/Hydrus.Network.494.-.Linux.Qt6.-.Executable.tar.gz

I had a great week with two big changes. First, there is an optional 'Qt6' version of the program for advanced users to try out, and second, File Import Options has some important updates.

Qt5 and Qt6

If you are a regular user, stick with the Qt5 versions of the release this week. They are the same as before.

Hydrus is moving up to a new version of its UI library, Qt. The new version has a ton of bug fixes and generally better support for newer OS concepts like UI scaling. I am going to be putting out releases for both 5 and 6 for a month or two, testing with advanced and then normal users, and then I will switch to 6 exclusively. Everything seems to be going well, and you don't have to do anything.

If you are an advanced user though, please try out the Qt6 builds. They work exactly the same as the old ones. Just to be careful, I recommend you not try them on your real database first off, and doubly so if you do not have a great backup. I am not worried about database damage, but you never know, and if there are problems, I don't want to cause you inconvenience on your main install. Try a fresh extract on your desktop first to make sure it boots ok, and then delete that extract. Then, if you want to try it on your real database, do a 'clean install' as described here: https://hydrusnetwork.github.io/hydrus/getting_started_installing.html#clean_installs If you don't do a clean install, you'll still have Qt5 dlls in your install folder and the client will default to the older version (for now). If you are a macOS user, you don't have the concept of a 'clean install', so just run the App as normal, but make sure you have a backup of your database first. There is also no Windows Qt6 installer yet. You can check help->about to see which version of Qt is currently running.

So far, the update has been remarkably smooth for me, with very few bugs. A user has been watching the situation for me and kindly provided a patch to deal with the most important syntax changes, so moving over has not been a massive pain in the neck. I've been using it IRL for a few days now, and I think things are just that bit smoother and less flickery. I am particularly interested in Linux and macOS users' feedback. So far, the main limitation I know about is that Windows 7 can't run Qt6 (it is just too old), but there may be other issues on other platforms. Let me know, and we'll see if we can iron them out. I am going to keep hydrus Qt5 compatible, so anyone who needs to stay on it but wants to keep updating will have the option of running from source.
file import options

My other plan for this week was getting 'note parsing' working for the downloader. I laid out everything I would have to do, and the first bulky job is getting 'Note Import Options' integrated. That will be a surprising amount of work, over some ugly areas, so I decided to take things a little slower and do some cleanup beforehand so I could do it properly. So, this week I overhauled some of how File Import Options works. There is some behind-the-scenes work to make all the import options work a little nicer, and on the front end, File Import Options now does two new things:

First, File Import Options now has a filetype filter. You can say 'only allow jpegs' or 'only allow video' or whatever you like, just like the 'system:filetype' search predicate. Import Folders used to have this too, but they hand those settings down to their File Import Options when they update this week.

Second, File Import Options now has the idea of being 'default', like Tag Import Options does. All your existing File Import Options will stay as they are, but any new ones will be in the 'default' state, meaning at the time of import your settings under options->importing will be used instead. This makes it easier to edit your File Import Options en masse, since there is only one place to go for most changes you will ever want to make. The manage subscriptions dialog now has an 'overwrite file import options' button too, if you do want to mass-set some specific File Import Options across your subs. You might like to just set them all to default this week--I think I will.

This 'default' concept is going to be applied to Note Import Options too. I am still thinking about how extensive the defaults system should be for File and Note Import Options. At the moment, File Import Options still just has the 'quiet' and 'loud' defaults in the options dialog, but I could expand things so you can set a default File Import Options per web domain, as you can for Tag Import Options. I'm thinking of combining them all into one tabbed edit dialog, so I may also extract the Presentation Options out of File Import Options, since those may have a different shape of 'defaults' than the rest of File Import Options. If you care about this stuff, let me know what you think.
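Conceptually, that 'default' state is just a sentinel that defers resolution until import time, which is what makes the mass edit cheap. A minimal sketch of the pattern, with hypothetical names (this is not hydrus's actual class):

class FileImportOptionsSketch:
    def __init__(self, is_default=True, allowed_filetypes=None):
        self.is_default = is_default              # 'use the defaults at time of import'
        self.allowed_filetypes = allowed_filetypes

def resolve(options, live_defaults):
    # a 'default' object is swapped for the live global settings at the
    # moment of import, so edits under options->importing apply everywhere at once
    return live_defaults if options.is_default else options

The payoff is that anything left on 'default' silently tracks the options dialog, while a deliberately customised object keeps its own settings.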
full list

- QT6:
- thanks to a user's help, we are rolling out a Qt6 test build this week. we've been running Qt5 for a few years now. 6 is mostly a very large bugfix patch, and I am hopeful this update will relieve several legacy issues related to UI scale, colour support, draw flickering, and other unusual stuff. so far, it is working great for me. I'll be putting out joint 5 and 6 builds for 4-8 weeks to iron out any big problems, and then I'll switch over to 6 releases exclusively. if you are an advanced user, please give it a go this week or next and let me know if you run into any traceback errors about deprecated method names or completely jank layout in the less used parts of the program
- the actual changes you'll see are mostly style--slightly different font spacing, things like that. if you have a system-baked Qt5 style that hydrus magically inherits, this will no longer work; you need to get a Qt6 version of the style (although I understand this is happening already for the popular styles, so you may already have them)
- users on Windows 7 and similarly old OS versions are unable to run Qt6 programs, sorry!
- I intend to keep the code 5-compatible, and users who run from source can choose whichever version of Qt they prefer, as here in the help: https://hydrusnetwork.github.io/hydrus/running_from_source.html#qt
- the linux Qt6 build also goes up from ubuntu 18.04 to 20.04. let me know if you have any trouble, but it feels like it is time to update this too
- .
- file import options overhaul:
- I wanted to do note parsing this week, but when I reviewed the whole job, there wasn't enough time to do it properly. so, in prep for a cleaner introduction of 'note import options' next week, I am overhauling how the other import options do some stuff
- all file import options now support filetype filtering! it uses the same control as system:filetype or import folders, but with some improved logic. on update, existing import folder filetype settings will be copied down to the file import options
- file import options now work on a similar 'default' system as tag import options. existing file import options will stay as-is, but new ones will begin in a 'use the default settings at time of import' state. those defaults are editable under _options->importing_. for now I am not adding a 'use this file import options default for this web domain' system, but it might happen in future. let's see how this all shakes out first
- the file import options button now has a right-click menu like the tag import options button
- the manage subscriptions panel now has an 'overwrite file import options' button to mass-set FIO
- cleaned up a bunch of old file import and import options code
- .
- misc:
- system:filetype now remembers meta filetypes better. if you select 'all video', it will now still select all video even if hydev adds support for a new video type in future. also, if you select 'video + animations', it'll say that rather than listing out every possible specific type
- fixed an issue where loading a favourite search wasn't always setting the 'include current/pending' values on the buttons correctly
- fixed up a status display in the gallery downloader and watcher pages--if you pause an importer while it is doing work, it now says 'pausing...' as its status until any current jobs are finished.
it was giving empty text before, as if it were finished already
- fixed some unusual behaviour with downloader highlighting where the first query pended to an empty page was secretly highlighted for the next session load, and fixed the 'subscription gap downloader' also doing this and not obeying the normal 'highlight new downloaders if nothing already highlighted' option
- improved the error when the 'make sure this directory exists' function runs into a file with that pathname
- fixed a rare selection position error, maybe Qt6 only, when clicking in the thumbnail grid as it is loading
- .
- boring Qt6 code cleanup:
- as a side thing, I set up quick-launch environments for QtPy5, QtPy6, PySide2, and PySide6 in my IDE this week, so I can now test all these situations and jump back in time no problem in future
- integrated a user's patch to bring us up to Qt6 compatibility and did a little more work to get it backwards compatible with older qtpy and Qt5
- refactored the critical Qt boot setup and monkeypatching from QtPorting to a new QtInit module
- migrated the hydrus code for keyboardModifiers, event-pos, and globalPos all to the Qt6 equivalents so the monkeypatching is always going to be on older versions looking forward
- fiddled with QPoint and QPointF conversions a little so I _think_ Qt5 and Qt6 are always talking about the same type
- updated build scripts and requirements.txts for the new situation
- updated the help a bit for the new situation

next week

Note Import Options! I'm going to focus on it. I'll see if I can merge all the Import Options together, get the note merge tech we need working and tested, and then get some actual note parsing working in the downloader so we can play around with it.
>>9090 >>9094 Thanks, I will make sure to rename the option here. Sorry for the confusion around this. I think that was originally an experimental thing that got accidentally formalised. I will read through your posts again properly and talk more on Saturday, but if it helps at all, I have some FAQ discussion on my thoughts around filenames and hashes and folders for our situation as background reading here: https://hydrusnetwork.github.io/hydrus/faq.html#filenames
I updated to the Qt6 build and now the whole client is like 720p for whatever reason
>>9101 I read the relevant FAQ section before ever complaining. Pic related is why you still need filenames.
Whenever I press f3 to begin tagging an image, the window that appears is left aligned, sitting right on top of the image preview, so I have to drag it to the right every time to view the image clearly as I tag it. Does anyone know if there is any way to move either where the window appears, or to move the image preview to the right?
When tagging new files and trying to add tags to multiple files at once, it's easy enough to select a group of consecutive files in the inbox and tag them all together, but is there any way to select non-consecutive files as a group? I would have thought shift-clicking would accomplish this, but it just selects everything in between the two files I click, same as using shift+arrow keys.
>>9099 >Tag import options This reminded me, the default tag blacklist is too damn hard to get to. As it is now, you go Network > Downloaders > Manage default tag import options (opens new window) > Tag import options (new window) > Blacklisting on ... (new window, target). I think it should be much easier to get to (say Tags > Blacklist), and having the option to (un)blacklist a tag on a selected file via right click menu would be amazing as well.
I've noticed when I'm going through tagging my files, I often forget some things I need to tag that apply to every file, or at least most files I'm currently tagging. Is there a way for it to display suggested namespaces that aren't yet used on the selected file(s)? For example, pretty much every file I tag, whether it's completely safe for work, suggestive, or explicit, gets a tag under the namespace "suggestiveness". Or, for large amounts of files containing characters, things like whether there's a male or female depicted, using the namespace "sex". I'd like a display, probably underneath the recent tags list, showing suggested missing namespaces I've picked out that I use fairly often. Is there something like that?
And while I'm at it, is there a way to mass edit all instances of a certain tag or namespace? I've made mistakes in my early tagging. When it's just replacing a tag with another similar tag I think is better, it's easy enough to remove it and then add the new tag, but this often involves retyping things like the namespace unnecessarily when it would be easier to just edit the existing tag. It's a lot more cumbersome if I fuck up a namespace. For instance, I didn't realize that "character:" was an already existing special namespace that's color coded in green. I went and used the namespace "name:", and now I have to manually change every set of "name:*" tags to the proper namespace. If I could just edit all instances of the namespace directly, this would be incredibly less tedious.
>>9133 And I now realize I'm retarded. I could have just deleted the default 'character' namespace in the tag presentation options, added the 'name' namespace, and turned it green. I think I might be too stupid to use hydrus.
How do I make the creator tags that appear over the top of every thumbnail with a creator namespace tag go away? I thought deselecting "on thumbnail top: creator - series - title" would accomplish this, but it does not.
>>9121 Nevermind, I am again fucking stupid. Ctrl-select instead of shift-select accomplishes this function.
Now that I'm making use of colored namespaces, is there any way to change the sort order by color? I'd like all displays of tags to sort my "namespaced" and "unnamespaced" tags below all the specific namespaces that I've given a color.
(91.26 KB 617x367 095026.png)

I use 125% scaling in Windows (text is just too small for my eyes on a 27" 1440p display). Previous versions of Hydrus ignored that, but the Qt6 build does not. However, thumbnails now look quite ugly, as they are scaled to 125% with no filtering. The same issue exists in the media viewer, and probably the preview pane as well. Perhaps you could make image rendering ignore the scaling, if possible.
Fix: client.exe properties > compatibility > change high dpi settings > override high dpi scaling behavior: set to System (Enhanced). But now I have to reduce the size of thumbnails to get the size I'm used to once they scale to 125%.
(319.40 KB 1451x1122 100753.png)

>>9140 But that fix completely breaks video rendering. Looks like I'm not using Qt6 for the time being.
>>9135
>Deleted
Why would you do this, anon? If you realized your complaint was in error, you should at least explain how, in order to help others avoid the mistakes you made and better use the program.
>>9133 I've sped up fixing individual tags a bit with proper use of the copy and partial copy functions, but I still haven't found a way to alter a namespace when I want to change one.
can you make it so that if there's multiple same quality duplicates that match a search, only 1 will appear on the page each time. i got like 15 files that are same quality duplicates of each other and because of that they just keep showing up everywhere. Much more than other files so it's unbalanced.
>>9144 Why not just delete the dupes?
>>9064 No way to do 2 yet, but it is requested often and I plan it. Basically, the way you rename tags in hydrus is the 'siblings' system under tags->manage siblings, which I completely overhauled last year to work with more reliable logic. The next expansion of this system will be to make it accept rules and work efficiently en masse, and on namespaces in particular, for exactly the sort of renaming you want here.

>>9070 Also >>9075 No. Hydrus doesn't handle multi-file collections very well yet, and it absolutely can't do 'playlists' that external programs could accept yet. In future I expect to add support here and there, and I'd like to add really nice cbr support one day, but I don't think we'll ever be as good as a program that is tuned for it, like ComicRack. Feel free to experiment, but keep your manga out of hydrus for now. Same for music mp3s that you'd want to throw at foobar--hydrus isn't for that atm.

>>9073 Thanks for the follow-up. This is odd, and I am not sure how it is happening. There is some transitive logic in the duplicate system, so groups can merge in unexpected ways sometimes (see the sketch at the end of this post), but if you always click 'and delete the other', then there shouldn't be any groups of n>1 in your 'my files' collection. And there's no auto-dupe resolution yet, so the only way you can set dupes is in the filter or with manual thumbnail commands. Unfortunately, your symptoms point to a logical problem on my part, as if the various 'media ids' here are being assigned to the wrong files. But that would probably only be feasible if files were being assigned as dupes completely randomly, as if all the dupes you found were weird false positives, like a picture of an anime babe linked up with a picture of a train. It doesn't sound like you are getting that, right? I am now afraid that the dissolve action somehow has bugs--that it isn't relinquishing ids fully sometimes--and it hasn't been noticed since the command is used so infrequently.

To get even more concrete, could you walk me through the shape of one of these problems? For instance, could this have happened in your client?

- Four pictures of Asuka, ABCD, all the same but different sizes
- You set A>B, B deleted
- You set C>D, D deleted
- A and C are now somehow set as dupes

Is that the sort of thing that is happening--dupes being set in confusing ways but ultimately correct? Or is this more like it:

- Run duplicate filter
- This picture of Asuka > that one, delete the bad one
- Look up your dupes
- A picture of Rei is now duped to a picture of a bus somehow

I'd like to figure out exactly where the logic is failing here, so any specifics you can offer would help.
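To picture the transitive logic mentioned above: duplicate groups behave roughly like a union-find structure, where marking any member of one group as a dupe of any member of another fuses the whole groups. A toy sketch of the concept, not the actual hydrus schema:

parent = {}

def find(x):
    # walk up to the group's representative ('king', loosely speaking)
    parent.setdefault(x, x)
    while parent[x] != x:
        x = parent[x]
    return x

def set_duplicates(better, worse):
    # fusing the two groups is one pointer swap
    parent[find(worse)] = find(better)

set_duplicates("A", "B")  # A > B
set_duplicates("C", "D")  # C > D
set_duplicates("B", "C")  # one cross-pair quietly merges all four
print({f for f in "ABCD" if find(f) == find("A")})  # {'A', 'B', 'C', 'D'}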
>>9146 >No way to do 2 yet, but it is requested often and I plan it. Awesome.
>>9075
>Why must hydrus replace a filename with the file's hash?
>>9076 >>9090 >>9111 Thanks again. My general position on the filename/hash thing is in that FAQ, and I've made the fairly firm decision to do hydrus file storage as I have because of those issues. It wouldn't be technically impossible to track external files with custom filenames, but it would need a significant amount of additional maintenance and recovery code. I'd have to deal with files being renamed and changed and all that, which I have decided is outside of my scope. I'm trying to solve the problem of managing a million imageboard-style files without filenames, so anything that needs filenames is going to have to stay out or be hobbled a bit. Sorry! I used to use ACDSee for those sorts of images on my personal machine, which worked ok and had simple tagging support. I now use ImageGlass and like how quickly it loads. It works fine for manga if you don't need bookmarks and the other stuff that ComicRack etc. support. I agree that losing the ability to easily upload to filename threads is a shame. The drag-and-drop filename pattern is undeveloped too. I want to rework that 'phrase' system into a proper generator object (the stuff you see under options->tag presentation for the thumbnail banners was an initial attempt at this, btw, but I never really liked how it came out). If you decide you still like hydrus and stay with it, please let me know what you like and dislike about the rest of the program and the learning process in general. Keeping the help docs up to date and the UI new-user-friendly is a constant battle.

>>9079 >>9080 Hmm, this generally isn't supposed to appear in any case. Normally, when a trashed file leaves the trash, all its thumbnails are removed from view. You have to do some advanced stuff to see these files normally. Could this session have been an old backup or something, loaded from before the thumbs were deleted? Could I have made it more obvious somehow that these files were permanently deleted? I know that stuff is buried in the right-click menu labels, but something more obvious? Maybe instead of the trash-can icon, I could show a more permanent 'this is not available any more' kind of icon?

>>9102 Thanks. Can you talk more about this? Was the window shrunk to a tiny size, or is it more like the window is the normal size but everything inside it is blown up huge? And maybe pixelly? I've got some reports that thumbnails are scaling badly in Qt6 if you have UI scale > 100%. Do you know your UI scale on that monitor? If you are on Windows, right-click on your desktop, hit 'display settings', and look for the UI Scale %. Also, maybe you can post a screenshot of how it looks? Spoiler if nsfw, but I don't care about the content. A blank page is also fine. You can also email it to me or DM me on discord. Win+Shift+S takes a screenshot on Windows.
>>9148
>It wouldn't be technically impossible to track external files
I'm not really asking for that. The solution to the filename issue already existed within hydrus, so the core problem is resolved:
>autotag all imported files with their own filename under the namespace "filename:"
>rename all drag and drop exports to whatever is tagged under the namespace "filename:"
The root cause of the issue was that the drag and drop export option is labelled as an option for Discord, leading one to believe it's something that functions only for Dicksword, and that it's found under gui options. Though the latter might make sense and I'm just stupid for expecting a section dedicated to export options, or for export options to be paired with import options when import options are so much more robust.
>>9148 While I'm on the topic of the "filename:" namespace, would it be possible to select just this namespace to have its tags retain case sensitivity? This is a really unnecessary feature in my eyes, but it would make exported filenames look nicer.
>>9120 Yeah, check the last response here >>9055

>>9121 >>9136 There are extra options for how shift- and ctrl-click do preview highlights under options->gui pages, and I'm moving them to thumbnails soon. Damn, thank you for the report about that 'turn off to hide' not working. I think that's just a bug; I'll fix it.

>>9137 Not yet. You can only sort tags/namespaces alphabetically right now. I'd like to add a forced sort system one day so you can make sure (creator, then series) is always at the top, etc...
>>9151 >Yeah, check the last response here >>9055 >>9055 >Should be able to save its last position too. Thanks. This helps immensely. >Damn, thank you for the report about that 'turn off to hide' not working. I think that's just a bug, I'll fix it. Ah, I actually did something productive for once. >I'd like to add a forced sort system one day so you can make sure (creator then series) is always at the top etc Would be fantastic and make it much easier to check for tags I've forgotten to add to images.
>>9140 >>9141 Thanks. I have had a couple of reports like this. I can reproduce the problem on my dev machine, so I will work on it. I hope to have something figured out for this week or next. Please check the later changelogs and give it another go when convenient.

>>9123 Thanks. I'll keep this in mind as I rework the different import options in the near future. Some of this is tricky to make convenient UI for, simply because hydrus has complex options, but I feel like the 'edit import options' dialog(s) should have some link to edit the defaults, especially when you are set to use that default.

>>9132 I can't think of anything specific here, although I like the idea of a checkbox workflow that makes sure you do the bare essentials. This sounds stupid, but could you maybe just make a search page for something like:

system:archive
-rating:anything OR -sex:anything OR -whatever:anything

I think that says 'find anything processed that doesn't have all of these namespaces'. You can make an OR real easy these days just by clicking the 'OR' button. And to get an 'anything without this namespace' "-rating:anything", just type '-rating' and it should turn up as a special thing to select. If you are in help->advanced mode, you might also want to try the 'OR*' button, which allows very clever searches.
>>9153
>This sounds stupid, but could you maybe just make a search page for something like:
>system:archive
>-rating:anything OR -sex:anything OR -whatever:anything
>I think that says 'find anything processed that doesn't have all of these namespaces'.
This was basically my first idea for a bandaid solution, but it's an after-the-fact fix for once I've already missed a bunch of tags, rather than an at-the-time-of-original-tagging solution like the checkbox workflow you describe. Maybe in the future, once more pressing issues are taken care of, you could add that function, since it seems like a large addition that would require a good bit of work.
The session save confirmation dialogue box currently does not include the name of the session you are overwriting. Could you add it to the dialogue text? It would be a safeguard against overwriting the wrong session after an unnoticed misclick.
>>9133 >>9134 >>9143 It sounds like you have found the tags->manage tag siblings system, which replaces one tag at a time, but there's no support for namespace siblings (renaming 'creator:' to 'artist:' and so on) yet. It is requested often, though, and something I would like to do. I hope to make it the next large expansion of this system, making the current (very CPU heavy) logic work efficiently en masse and on more complicated tag replacement algebra. If you haven't seen siblings yet, check them out! Some extra help here: https://hydrusnetwork.github.io/hydrus/advanced_siblings.html
>I think I might be too stupid to use hydrus.
Nah, it just takes practice. I over-designed all this shit and forget things about my own program all the time.

>>9144 I'm not totally sure how to deal with dupes in normal searches. When I wrote the current dupe system, I was mostly concerned with getting the database structure solid, so the UI support, as you've seen, is almost completely sparse. I may do something like collections, where you can hit a checkbox and dupes will collapse into one thing, but navigating how all that UI should actually work, display, and collapse/expand while still being fundamentally useful only makes me think of how much work it would need. One option you have here, btw, although it may be annoying, is adding 'system:file relationships' ('system:is the best quality file of its group') to your query. It'll filter only for kings. Give that a go, and if you like it, let me know. Maybe I can figure out a quiet 'always add this system predicate to every search' option, like the implicit system:limit.

>>9149 Great, sorry again for the confusion.

>>9150 Can't do it with tags, I'm afraid, as those are all locked to lowercase. I'm working on a 'notes' expansion right now though, and I wonder if notes are where we could start storing this 'rich' text metadata. Tags are for searching, after all, not describing, so when we have much nicer access to notes with better UI, system, and Client API support, maybe we'll start parsing filename stuff to there instead.

>>9155 Sure, thanks!
>>9148 Regarding the 720p stuff: here's how it looks. The first two images are the Qt6 build, while the last is Qt5, which is how hydrus has always looked for me. I'm on a laptop, so my UI scale is at 125% normally. Thanks for the screenshot tip, I never knew about that functionality on W10; I've still been using Win+PrtSc lol.
How come Hydrus doesn't see these two images as potential duplicates, even at search distance 6? Very odd. (nsfw)
This has probably been asked before, but do you plan on adding jpeg-xl support soon? If you haven't heard of it, it's a new image format that intends to be a strict upgrade and replacement for jpg, png, and gif files, by combining the features of all 3 while also having a lower filesize than any of them. It's really exciting and I already have some jxl files on my computer that I'd like to import.
Random small idea to add to the todo: skipping corrupted downloads. I've only encountered this with Sankaku, but some files stall at the exact same spot in the download and Hydrus eventually restarts downloading them. These files always stall at the same location (as far as I can tell), and I have to manually skip them.

>>9161 I think the Hydrus stance on new file types is that they're added when the libraries used for displaying images support them. I remember this happening with webp too. I'm excited about JXL as well. I also remember reading that Hydrus might eventually get a sort of "hooks" system, where you could, say, pass an image through Waifu2x from inside Hydrus and have the new image replace the old one. I'm assuming metadata about the original would be saved somewhere, so you could still copy the original's hash and such if you wanted. If such a system were implemented, I'd love to convert all of my JPEGs at least to JXL, since you can losslessly convert them.

>>8151 Dev, let me know if I was just dreaming about the above. I think I read something like that in one of these threads, but I could be making it up.
The nitter downloader doesn't grab any tags at all for me, even though it looks like it's supposed to at least grab creator and title tags, judging by the parser.
It'd be cool if you could add web domains to the duplicate filter's scoring system and give each of the domains you add a positive or negative score. I notice that certain domains typically give higher quality images and other domains often give lower quality ones. It'd be cool if we could give that info to the duplicate filter to help the A/B pairs be more accurate.
For tags I haven't thought of a good namespace for, I'm currently using the namespace "related:", so when I feel like coming up with namespaces later, I can check all these tags appearing as results for "related:*". Is this necessary autism on my part, or is there a way to display all unnamespaced tags?
>>9156 >I'm working on a 'notes' expansion right now though, and I wonder if notes is somewhere we could start storing this 'rich' text metadata. I saw this smallfont filename moments ago and creamed my pants at how nicely it enhanced the joke. I eagerly await notes storing rich text data for filename exporting.
>>9146
>dupes
It's certainly not the second option; I'm not having false positives. They're all correct duplicates that only differ in size or compression. But I don't think it's the ABCD issue either; I'm just getting different pairs of images. Some are caught in the dupe filter, others only show in the search. My actions in the filter don't seem to affect what ends up in the search. I just don't get why the filter ignores the latter until I dissolve them so they can be analyzed again.

I think the course of events is as follows. As I mentioned, I made a parser using the API of a facebook-like social network site. There are albums of images, and then there is the "wall" (I don't know if that's the proper term--like the main feed of a club) where those images are posted, all from the albums. So I have a subscription that pulls new images from the albums, and another one that checks the wall to mark the images that were already posted (it's set up to add a specific tag to all images from the wall). Now, the images are the same, but the ones from the albums are older and sometimes use a different compression algo, so Hydrus often redownloads the copy from the wall instead of just assigning a new tag to an existing pic. That's okay with me, since they usually end up in the dupe finder. But sometimes they don't, and I have to check the search expression and manually dissolve them so I can use the dupe filter on them. I've yet to see any pattern in which images go where.

I think that maybe I did something weird with the "associate and trust the source urls" option or something like that; the parsing system I created is super complicated and I'm not even sure how it works anymore. Some images end up with 5-10 file urls that point to the "same" image but use different parameters and compression, since the downloader runs once a day and requests a whole page of images, often visiting the same pics multiple times. But the urls shouldn't affect the dupe filter in any way, right?
(60.17 KB 527x618 1554128234016.jpg)

It seems like I fucked something up somewhere. The search tag for "system:everything" disappeared on me a while ago, and since I still see people talking about it, I can only figure I messed something up so it no longer appears. Also, up to this point I've been manually tagging thousands of my files by hand with just the tags I care about, but I'm kind of done with that, so I have hundreds of tags I'd like to normalize with the booru ones. I assume there's no way to change tags with underscores to tags with spaces, right? I've only found the option that displays spaces, but it doesn't change the actual tag. Another thing: I use "artist" tags in my db, and I noticed that boorus (despite having it labeled as "artist") are importing into the "creator" namespace. Is there a way to have them imported as "artist" instead, or to change all mine to "creator"?
>>9170 It automatically gets hidden once you have over 10,000 files; you can re-enable it in the settings under search.
>>9171 Is 10,000 arbitrary, or is it around there when Hydrus starts struggling?
I have downloaders that import files from artists I like on kemono. Sometimes the files it downloads are zip files. What I do with those is extract the zip's contents to a desktop folder, delete the zip file from hydrus, and then import the extracted files from that folder at some later time. The problem is that the zip files had filenames I want to keep. The name is added as a tag to the zip file itself, as I have it set to do with all files, but I don't keep the zip files because they aren't really usable, so I keep the contents and delete the zip. Is there any way for the zip file's filename to not be lost, and to somehow be added as a tag to the content files that I import, maybe under a namespace like "archive file name:" or something? There probably isn't any way to keep the names unless hydrus had some "auto-extract" import feature for archive files that could then apply some tag logic, but it sucks that I'm just losing that info, especially since files imported from zips don't get any tags at all, because the downloader didn't download those files directly--it only downloaded the zips that contain them. So I don't even get creator tags or title tags automatically for those.
>>9173 I've run across many pictures that other people have tagged with "zipname:original_zip_filename". You could do that as well by adding the tag while importing. It's easiest when you import the contents of only one zip file at a time.
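If you end up doing this a lot, it could be scripted through the Client API: import each extracted file, then tag it with the zip's name. A rough sketch; the port, the access key, the 'my tags' service name, and the paths here are all placeholders to check against the Client API help and your own setup:

import os
import requests

API = "http://127.0.0.1:45869"  # default Client API port
HEADERS = {"Hydrus-Client-API-Access-Key": "YOUR_ACCESS_KEY_HERE"}

def import_extracted(folder, zip_name):
    # assumes a flat folder of files extracted from one zip
    for name in os.listdir(folder):
        r = requests.post(f"{API}/add_files/add_file", headers=HEADERS,
                          json={"path": os.path.join(folder, name)})
        r.raise_for_status()
        file_hash = r.json()["hash"]
        # tag the new file with the originating zip's filename
        requests.post(f"{API}/add_tags/add_tags", headers=HEADERS, json={
            "hash": file_hash,
            "service_names_to_tags": {"my tags": [f"zipname:{zip_name}"]},
        }).raise_for_status()

import_extracted(r"C:\extracted\example_pack", "example_pack.zip")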
Is there a way to make that "discord" drag and drop option that you guys are talking about work for groups of files over the 200mb limit and for more than 25 files at a time? It's okay if it's slower. It'll still be quicker than me dragging and dropping each file one at a time.
Any idea why my search query won't find any new content? The sankaku tag has millions of images I haven't downloaded. In the search log, it says "0 new urls found (20 of page already in)".
>>9179
>Any idea why my search query won't find any new content? The sankaku tag has millions of images I haven't downloaded.
Forgot to mention this is a gallery downloader page.