The problem with scrapers is that eventually your library gets so big that the scrapers have trouble with it. Doesn't matter the software- the scrapers are all very similar in their benefits and weaknesses. Like Aikouka mentioning that scrapers work better from the show root page- I have had the same experience in multiple pieces of software. If you check the logs, the reason is probably that the scraper is failing somewhere in your huge library. Scrapers by definition are a bit of a hack, and I have never met one that can take a 300+ show library and not quit every now and then. If nothing else they fail because the web server starts rejecting requests after a while when you slam it all at once like that. With nfo files you get the info when the file downloads/rips/records- one at a time instead of all at once. Whatever the source webserver is doesn't get hammered, and you never have to get that info again if you have to rebuild the library.

What solution works best for you depends completely on your scale. I have a friend that SWEARS by Plex with everything. He has the lifetime pass, he uses it all over the place. Personally, I hit a point where my library is so big that NONE of the major solutions work "out of the box." My options were to either deal with usage hacks (like the scan-the-show-root trick) or throw enough extra power and software at the scalable open source solution to make it work. My MySQL server that runs my library eats RAM like a pig, but I get optimal GUI performance. For that optimal local GUI performance I sacrifice features, like never having my remote access library be perfectly the same as my local library, but there is no solution that can be everything for everyone. That is why it is so difficult to recommend these options blindly to people. Each case is different, each priority set is different. I know what works for me, but maybe someone else consumes the media in a different way. Unfortunately, I think personal trial and error is often the only answer.

Without nfo files, any time I want to update my library the scraper has to go through and check all the media files themselves (which have to be named perfectly) and then get the needed information off a website. With nfo files it simply scans through the file system for a single file type, finds those files, and then extracts the media information from them. So if I ever need to rebuild my library in the future (which happens), without nfo files I have to go and rescrape EVERYTHING off the TVDB right then and there. With nfo files I already downloaded all that information locally ONCE when I added the file to the server, so I can put it all together in a library in a fraction of the time.

A scraper is a piece of software that goes to a web page designed for human consumption and tries to copy that information off like a human would. That means if the website changes its formatting your scraper is broken, because it is looking in the wrong place for the info- it lacks the intelligence a human has to look elsewhere in the page. Nfo files, on the other hand, are like having an XML settings file for a program. The media program will ALWAYS be able to extract the needed information from it because it will always be formatted in the same way.

You can't get around an initial scrape to get your information. But with nfo files you only have to do that process once- when it's most convenient to do it- and not every time you want to redo your library. My setup basically puts the media all on a platter for Kodi. I have a program that gets the media file, renames it to the proper naming convention, scrapes the media information to an nfo file, and then puts it in the proper place on my server. None of it is manual; it is all automatic.

Plex is a fork of XBMC, and from what I understand a lot of that scraper code is what XBMC had when the fork happened. The Plex scraper has the same limitation any scraper does- aka it wants to scrape directly to its library all at once. This sucks if you just got a lot of content recently and you want to update it all at once. With nfo files, the time spent scraping is basically shifted from when you tell the software to update the library to when the file is acquired. That is a huge advantage, and it makes nfo files the only solution out there that can handle a large library.

Another piece of this we aren't even mentioning is how the software not only creates an nfo file, but also downloads locally all the fanart associated with the TV show. This is a huge benefit, because honestly all that image downloading is what most often breaks scrapers trying to scrape a large library. All nfo files do is remove a bottleneck by putting a needed task elsewhere in the chain.
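The scan-and-extract mechanic described above can be sketched in a few lines of Python. This is purely my own simplified illustration (the field set, file layout, and function names are assumptions, and Kodi's real scanner handles far more), but it shows why an nfo library rebuild is fast and offline: the metadata is just fixed-format XML sitting next to the media, so the program walks the file system, finds the .nfo files, and parses them- no website involved.

```python
# Hypothetical, simplified sketch of an nfo-based library scan (NOT Kodi's
# actual implementation): write one Kodi-style tvshow.nfo, then rebuild a
# "library" purely from local files -- no web scraping, no rate limits.
import os
import tempfile
import xml.etree.ElementTree as ET

# A tvshow.nfo is just XML with a predictable layout, so extraction always
# looks in the same place -- unlike scraping an ever-changing web page.
SAMPLE_NFO = """<?xml version="1.0" encoding="UTF-8"?>
<tvshow>
    <title>Example Show</title>
    <year>2010</year>
    <plot>A show used to illustrate nfo parsing.</plot>
</tvshow>
"""

def scan_library(root):
    """Walk the media tree, parse every .nfo found, return metadata dicts."""
    shows = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(".nfo"):
                show = ET.parse(os.path.join(dirpath, name)).getroot()
                shows.append({
                    "title": show.findtext("title"),
                    "year": show.findtext("year"),
                    "plot": show.findtext("plot"),
                })
    return shows

with tempfile.TemporaryDirectory() as media_root:
    # Simulate the "file acquired" step: the nfo lands next to the media once.
    show_dir = os.path.join(media_root, "Example Show")
    os.makedirs(show_dir)
    with open(os.path.join(show_dir, "tvshow.nfo"), "w", encoding="utf-8") as f:
        f.write(SAMPLE_NFO)

    # Rebuilding the library is now just a local scan, repeatable at any time.
    library = scan_library(media_root)
    print(library[0]["title"])  # -> Example Show
```

The point of the sketch is the shape of the work: the expensive, fragile step (talking to a website) happened once when the file arrived, and every later rebuild is a cheap local walk-and-parse.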