CDs


DVD Ripper

Download the dvdrip bash script

This is the correct way (for me) to rip hundreds of DVDs. I still wish there was a global hash table of discs whereby we could automatically name individual files, but this does the job and I'll describe my overall workflow. Ripping TV shows is stupidly time consuming compared to audio CDs and I've done everything I can to reduce the time wastery involved. It's not perfect, but I can just feed disks through my machine all day then take an hour or so a week and rename everything I've done.


That 120 Minutes Playlist

Music: 

I've just been directed toward a YouTube playlist that apparently made the rounds last August claiming to have "Every Video Played on 120 Minutes".  Well no.  Not really.  The claim is "2506 Videos".  Reality is...less.

 

I grabbed the playlist and threw it into my nightly randomizing grinder.  I already have a "120 Minutes" playlist, into which I just cram every video from every band who was ever on 120 Minutes.  Since it all "spiritually" counts.  I put stuff that "should" be in there too, because what other slot would have played Humanwine I guess?


Playlists

Music: 

Dr. Dre - Nuthin' But a G' Thang

I had a request to share some playlist management stuff so I thought I should explain myself. I've got a significant CD collection, and a somewhat-significant collection of TV shows. This is fine on its own, but lots of media is pretty worthless without well curated playlists that you really don't have to think about. So I built my own Spotify, MTV and syndicated TV.

* NOTE: If you have a better way to do any of this let me know and I'll fix it. I particularly have the sense, which is not backed up by my testing, that "sort -R" isn't great.
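
For what it's worth, GNU sort -R sorts on a hash of each line (so duplicate lines end up next to each other) rather than doing a true shuffle. shuf, from GNU coreutils, is a real random permutation and could probably drop in anywhere I use sort -R below; a minimal example, assuming coreutils is installed (it's what the Pi has, on the Mac it would need installing):

# Possible drop-in for the sort -R calls below, assuming GNU coreutils' shuf is installed
shuf /var/tmp/4-star-tmp.m3u > /Volumes/Filestore/CDs/playlists/4\ Stars\ +.m3u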

Music's easier so we'll start there. I use Strawberry to manage my music. This was all running under Clementine and aside from some DB schema changes, the scripts are portable between them.

Until relatively recently I was never a big fan of "star" or "heart" ratings, but Clementine/Strawberry will store this metadata in the MP3 itself so I should be able to quickly recover if I lose my music database. In the app I have a few Smart Playlists like 3-Stars, 3 Stars + (This is 3, 4 and 5 star tracks), 4-Stars, 4-Star + and 5 Stars. To use 4 Star as an example, the rules look like this:

Match every search term (AND)
Rating - Greater than - 3.5 Stars
Rating - Less than - 5 Stars
Rating - Not Equals - 5 Stars
Length - Greater Than - 8 Seconds

That results in a playlist of 8423 songs with ratings between 4 and 4.99 stars. There was a bug in Clementine which I got fixed where ratings could exceed 5, so I'm a little careful to deal with weirdo cases, but it's pretty simple. I also have a bunch of manually selected playlists, so like an '80s one, '90s, and "Barn Radio". Barn Radio is our catch-all for the ubiquitous music we heard from the late '70s through late '80s. For Natalie that was largely with her dad in the dairy barn, for me it was the music of my 2 hours on the bus every day.
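
If I ever want to sanity-check that count straight from the database, ratings are stored as 0-1 in the songs table (you can see 0.8 and 0.9 used in the scripts below, so 4 stars is 0.8); a quick query like this should line up with the smart playlist, assuming the same strawberry.db copy the scripts use:

sqlite3 /var/tmp/strawberry.db "select count(*) from songs where rating >= 0.8 and rating < 1.0;"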

Anyway, I have all these .m3us stored in a folder along with my MP3s called "playlists_base". These are used by a nightly playlist generator that pulls ~200 tracks and makes daily playlists running 8 or 10 hours each. The reason for this is that streaming software such as Airsonic-Advanced kind of chokes on massive playlists. It could be Airsonic itself, it could be populating the mobile client, I don't really know or care, other than to say it works great with list sizes under about 1000 tracks or so, so I keep them shorter.

The x-Star playlists are all built from the database like this 4 Star + playlist below. You can see it do a couple of different Star Rating DB queries, dump out the tracks to $playlist_tmp.m3u, then cat that file and do a random sort to generate the final version. It's pretty easy to adjust the mix based on ratings, so if I wanted to weight high-rated tracks I could do that by adjusting how many tracks of the 200 are returned by each search:


#!/bin/bash

rm /Volumes/Filestore/CDs/playlists/4\ Stars\ +.m3u

i=1

while [ $i -le 100 ]
do

### Switching from Clementine to Strawberry ###
#       file=$(sqlite3 /var/tmp/clementine.db "select filename from songs where rating > "0.9" order by random() limit 1;" | awk -F "file://" '{print $2}')
        file=$(sqlite3 /var/tmp/strawberry.db "select url from songs where rating > "0.9" order by random() limit 1;" | awk -F "file://" '{print $2}')

        ### Clementine data encodes special characters and accent marks and stuff so I'm using
        ### Joel Parker Henderson's urldecode.sh to undo that: https://gist.github.com/cdown/1163649

        data=$(/home/xrayspx/bin/urldecode.sh "$file")
        if [ -f "$data" ]
        then
                ### Have to escape leading brackets because grep treated it as a range and would allow duplicates ###
                ### Can't do that in "data" because \[ isn't in the filename so they'll fail ###

                escaped=$(echo "$data" | sed 's/\[/\\[/g')
                #echo "$escaped"

                ### Avoid duplicates
                match=$(grep -i "$escaped" /var/tmp/4-star-tmp.m3u)
                if [ -z "$match" ]
                then
                        echo "$data" >> /var/tmp/4-star-tmp.m3u
                        ((i++))
                fi
        fi
done

i=1

while [ $i -le 100 ]
do
### Switching from Clementine to Strawberry ###
#        file=$(sqlite3 /var/tmp/clementine.db "select filename from songs where rating >= "0.8" and rating < "0.9" order by random() limit 1;" | awk -F "file://" '{print $2}')
        file=$(sqlite3 /var/tmp/strawberry.db "select url from songs where rating >= "0.8" and rating < "0.9" order by random() limit 1;" | awk -F "file://" '{print $2}')

        ### Clementine data encodes special characters and accent marks and stuff so I'm using
        ### Joel Parker Henderson's urldecode.sh to undo that: https://gist.github.com/cdown/1163649

        data=$(/home/xrayspx/bin/urldecode.sh "$file")
        if [ -f "$data" ]
        then
                ### Have to escape leading brackets because grep treated it as a range and would allow duplicates ###
                ### Can't do that in "data" because \[ isn't in the filename so they'll fail ###

                escaped=$(echo "$data" | sed 's/\[/\\[/g')
                #echo "$escaped"

                ### Avoid duplicates
                match=$(grep -i "$escaped" /var/tmp/4-star-tmp.m3u)
                if [ -z "$match" ]
                then
                        echo "$data" >> /var/tmp/4-star-tmp.m3u
                        ((i++))
                fi
        fi
done

cat /var/tmp/4-star-tmp.m3u | sort -R > /Volumes/Filestore/CDs/playlists/4\ Stars\ +.m3u

rm /var/tmp/4-star-tmp.m3u

Those Star Rating lists are called at the beginning of my overall static playlist script, but the Barn playlist and other manually selected ones are built from the "playlists_base" directory. I basically just edit those .m3us in place with Strawberry as we add CDs. The script just reads those files, does a random sort and pulls the top 200. This will use any .m3u in .../playlists_base/ and make a daily file from it:


#!/bin/bash

#scp xrayspx@pro:~/.config/Clementine/clementine.db /var/tmp/

### Switching between Clementine and Strawberry ###
#cp /Volumes/Filestore/CDs/playlists_base/clementine.db /var/tmp/

cp /Volumes/Filestore/CDs/playlists_base/strawberry.db /var/tmp/

/home/xrayspx/bin/3-star-playlist.sh
/home/xrayspx/bin/4-star-playlist.sh
/home/xrayspx/bin/5-star-playlist.sh
/home/xrayspx/bin/get-the-led-out.sh

ls /Volumes/Filestore/CDs/playlists_base/*.m3u > /Volumes/Filestore/CDs/playlists_base/m3us.txt

while IFS= read -r file
do

        filename=$(echo $file | awk -F "/Volumes/Filestore/CDs/playlists_base/" '{print $2}')

        echo Filename: $file

        rm "$file.full"
        rm "$file.scratch"
        rm "/Volumes/Filestore/CDs/playlists/$filename"

        ###Testing a change since Strawberry creates playlists without EXTINF lines ###
#        array=`grep EXTINF "$file" | sort | uniq`
        array=`grep -v EXTINF "$file" | sort | uniq`

        printf '%s\n' "${array[@]}" | sort -R > "$file.full"
        head -n 200 "$file.full" > "/Volumes/Filestore/CDs/playlists_base/$filename.scratch"

        n=0
        while IFS= read -r extinfo
        do
#       echo $extinfo
                term=`echo $extinfo` # | cut -d "," -f 2-`
#       echo $term

 ###Testing a change since Strawberry creates playlists without EXTINF lines ###
 # grep -A 1 -m 1 "$term" "$file" >> "/Volumes/Filestore/CDs/playlists/$filename"

        grep -m 1 "$term" "$file" >> "/Volumes/Filestore/CDs/playlists/$filename"
        done < "$file.scratch"

        rm "$file.full"
        rm "$file.scratch"

done < /Volumes/Filestore/CDs/playlists_base/m3us.txt

rm /var/tmp/clementine.db
rm /var/tmp/strawberry.db

For TV shows it's a bit more complicated. I've got individual scripts for things like Sitcoms, Saturday Morning Cartoons, Buddy-Cop shows, Nick-at-Nite, etc. Each script uses a text file which just lists the relative path to the directories I want to randomize. I just read in that text file then scan each directory and build an array that again I sort -R and dump in an m3u. You'll see a couple of my conventions here, like the "dvd_extras" folders I use for any extras that I want to keep but don't want to have show up in the mix, as well as a bunch of other crap I grep out.

This script references "./.sitcoms.txt", which looks like this:


./Archer (2009)
./30 Rock
./Absolutely Fabulous
./Alexei Sayle's Stuff


#! /bin/bash

array=$(
while read line
do
        find "$line" -type f;
done < .sitcoms.txt
)

printf '%s\n' "${array[@]}" | sort -R | grep -v -w "batch" | grep -v dvd_extras | grep -v "./$" | grep -v "\.m3u" | grep -v -i ds_store |
 grep -v "\.nzb" | grep -v "\.nfo" | grep -v "\.sub" | grep -v "\.sfv" | grep -v "\.srt" | grep -v -i "\.ifo" | grep -v -i "\.idx" |
 sed 's/^/..\//' > ./1\ -\ Playlists/Sitcoms.m3u

This dumps out to a folder called "1 - Playlists" inside my TV Shows directory, just so it shows up first. There's a folder in there for Blocks as well, in which I create blocks of 10 random episodes of a bunch of shows. This is built to replicate like TBS/TNT/USA in the evening where you just sit and watch a block of whatever is on. In practice I do this wrong and tend to be too picky about these and just watch blocks until I've worked my way through a whole series and wind up tired of it forever.
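
I haven't pasted a Blocks script here, but a block is basically the sitcom script pointed at one show and capped at 10 episodes; a rough sketch, with the show path just an example:

#! /bin/bash

### Hypothetical single-show Block; the show path is just an example ###
show="./Night Court"

find "$show" -type f | grep -v dvd_extras | grep -v "\.m3u" | grep -v -i ds_store | sort -R | head -n 10 |
 sed 's/^/..\//' > ./1\ -\ Playlists/Blocks/Night\ Court\ -\ Block.m3u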

One thing I do for things like Nick at Nite and overall Sitcom lists and stuff is that I mix in commercials. I don't do this very well though, I just treat my directory of commercials like any other TV show. I'd rather do "pull a TV show, toss in two commercials, repeat", but I'm not there yet I guess.
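
The "episode, two commercials, repeat" version would look roughly like this; a sketch only (the paths are assumptions, and it just stops inserting ads if the commercial pool runs dry):

#! /bin/bash

### Hypothetical interleave of episodes and commercials; not what I actually run ###
find ../../../Commercials -type f | sort -R > /var/tmp/ads.txt

while IFS= read -r episode
do
        echo "$episode"
        head -n 2 /var/tmp/ads.txt
        ### rotate the commercial pool so the same two don't repeat every time ###
        tail -n +3 /var/tmp/ads.txt > /var/tmp/ads.tmp && mv /var/tmp/ads.tmp /var/tmp/ads.txt
done < ./1\ -\ Playlists/Sitcoms.m3u > ./1\ -\ Playlists/Sitcoms\ With\ Ads.m3u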

The last type of lists I build are for music videos. I break this into a few different playlists, one overall catchall that pulls in all videos, a playlist for MTV 120 Minutes, and one for "Arcade / Pizzeria" music. Basically the ubiquitous music you'd hear in a pizza shop or arcade in the '80s or '90s. I do the same commercial thing here as well.

Example:


#! /bin/bash

array=`find ../120\ Minutes -type f;
find ../../../Commercials -type f`

printf '%s\n' "${array[@]}" | sort -R | grep -v dvd_extras | grep -v "./$" | grep -v "ERRORS$" | grep -v "\.sh" | grep -v "\.m3u" |
 grep -v -i ds_store | grep -v ".nzb" | grep -v ".srt" > 120\ Minutes.m3u


Rippin' DVDs

Music: 

Dana Carvey - Choppin' Broccoli

Today in Lattice of Convenience news, here's how to rip DVDs.

I barely understand the mencoder command that is the backbone of this thing, and there are many better ways to do lots of the stuff in this script. In fact I know several of those better ways, and looking at it fresh, I see some redundant stuff that cancels out other stuff. But it runs, and I use it, so here goes.

Ripping DVDs isn't fun. The disk labels are iffy at best; even within a single box set you might go from the Gold Standard "TV Show - S1D1" to "DVD_VIDEO" as a disk label. So it can get kind of ugly. To mitigate that I create an output folder based on the DVD disk label + a timestamp. If you get a run of disks with the same name, at least they're not overwriting each other's files, because the timestamp will shift. I currently have a dvdrip-output directory with the following DVDs in it:

...
DVD_VIDEO-090720202337
DVD_VIDEO-090820201025
DVD_VIDEO-090820201027
DVD_VIDEO-090820201142
I_LOVE_LUCY_S2_D1-090520202354
I_LOVE_LUCY_S2_D3-090620201047
LUCY_S1D1-090520201043
LUCY_S1D2-090520201043
LUCY_S1D3-090520201359
...

Those are all from the same box set. So that's 3 naming conventions from one series. To be fair I think that while it's the same company producing them they probably came as separate "season" boxes rather than one big set. Still. Come on. Jesus.

Another big gotcha I've hit, again mainly with TV series box sets: a single show might exist on the disk as many as THREE times. Once as a "standalone episode", once as "episode with commentary track" and once as part of a massive concatenated file of all the episodes on that disk. In the case of the commentary track, that audio seems to be separate, so the actual episode rips to exactly the same filesize; the commentary track itself doesn't seem to be something I have access to, so you just get two identical files at the end.

So as you're ripping, that's going to triple the rip time.

The way I'm trying to fix that is to rip the first 30 seconds of every Title on the disk, then do a SHA sum on those ripped sample files. As each Title finishes ripping, I drop its clip checksum into a "rippedchecksums" file. When the next Title starts, the first thing it does is check whether its checksum has already been ripped. If it has, skip it. It seems to catch 100% of repeated Titles, and probably 70% of the "Big Concatenated File" cases will match the sum for Title 1. Saves a shitload of time.

In this case, Title 1 is a standalone episode, and Title 21 is the Big Concatenated File of all the episodes on the disk. Title 21 will be skipped. Since I get about 70 or 80 FPS on my Mac Pro, that probably saved 90 minutes of rip time or so with 3 hours of video on the disk:

763b6035c4bf239b4425fb8f484018387574baca /Volumes/Filestore/dvdrip-output/DVD_VIDEO-090720202337/1-sample.avi
59cca1b18759647e13e3e1b6a4facace0520fc06 /Volumes/Filestore/dvdrip-output/DVD_VIDEO-090720202337/10-sample.avi
125add4181b9dc6eee57c32c07568765b8e4483b /Volumes/Filestore/dvdrip-output/DVD_VIDEO-090720202337/11-sample.avi
4daae35d014032964fe57e70e2cc3450f7dac4e5 /Volumes/Filestore/dvdrip-output/DVD_VIDEO-090720202337/12-sample.avi
a942f31a9ee42c5839772f733b2c666195397ad5 /Volumes/Filestore/dvdrip-output/DVD_VIDEO-090720202337/13-sample.avi
8c9473a940a9bc685d84e0ac29c66f53efa6667d /Volumes/Filestore/dvdrip-output/DVD_VIDEO-090720202337/14-sample.avi
29d2200d8c46ac11417119b4b7179e4b526d99cf /Volumes/Filestore/dvdrip-output/DVD_VIDEO-090720202337/15-sample.avi
466860b79bba6d132fcc97d6dc7c0c3a20dd771c /Volumes/Filestore/dvdrip-output/DVD_VIDEO-090720202337/16-sample.avi
f4ae11cca0752956c4d6025a8760a260a59fe79b /Volumes/Filestore/dvdrip-output/DVD_VIDEO-090720202337/17-sample.avi
00753d529f4bbf4081f647056cf44db7c630c198 /Volumes/Filestore/dvdrip-output/DVD_VIDEO-090720202337/18-sample.avi
b7f9c9087fed6b00d22de5033c153f9ffb3cd3b1 /Volumes/Filestore/dvdrip-output/DVD_VIDEO-090720202337/19-sample.avi
14efcb6164f1424b894cc28200ab621ec805ecd0 /Volumes/Filestore/dvdrip-output/DVD_VIDEO-090720202337/2-sample.avi
6c411c8869f1e6bc9a6ec298ba9b6a5c9eefc9ae /Volumes/Filestore/dvdrip-output/DVD_VIDEO-090720202337/20-sample.avi
763b6035c4bf239b4425fb8f484018387574baca /Volumes/Filestore/dvdrip-output/DVD_VIDEO-090720202337/21-sample.avi

At the end of it, I still end up with just a directory full of files labeled 1 through whatever.avi. I have to take a few seconds per file to get it to "TV Show - S01E01.avi". But from there FileBot can mass-rename them with episode titles.
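
The FileBot step is roughly this (the format string matches my "Show - S01E01 - Title" naming convention; double-check the exact flags against your FileBot version):

### Roughly the FileBot mass-rename; flags worth verifying against your FileBot version ###
filebot -rename /Volumes/Filestore/dvdrip-output/LUCY_S1D1-090520201043/ --db TheTVDB --format "{n} - {s00e00} - {t}"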

So here's the full ugliness. You'll want to adjust all the paths. I should have made variables, but I don't care, I maybe have 3 or 4 ripping trays running at a time on various machines, so I don't mind just changing the paths for each host. Works on OSX and Linux, and probably Windows with Cygwin, but I don't care about Windows so I'm not going to test it.


#! /bin/bash

timestamp=`date +%m%d%Y%H%M`

id=$(drutil status |grep -m1 -o '/dev/disk[0-9]*')

if [ -z "$id" ]; then
echo "No Media Inserted"
else
name=`df | grep "$id" |grep -o /Volumes.* | awk -F "Volumes\/" '{print $2}' | sed 's/ /_/g'`

fi
name=`df | grep "$id" |grep -o /Volumes.* | awk -F "Volumes\/" '{print $2}' | sed 's/ /_/g'`
echo $name
dir="$name-$timestamp"
mkdir /Volumes/Filestore/dvdrip-output/$dir

maxtitle=`/Applications/mencoder dvd://100 -o bob | grep "titles on this DVD" | awk '{print $3}'`

for title in {1..100}
do
if [ $title -le $maxtitle ]
then
/Applications/mencoder dvd://$title -alang en -ovc lavc -lavcopts vcodec=mpeg4:vhq:vbitrate="1200" -vf scale -zoom -xy 720 -oac mp3lame -lameopts br=128 -endpos 30 -o /Volumes/Filestore/dvdrip-output/$dir/$title-sample.avi
shasum /Volumes/Filestore/dvdrip-output/$dir/$title-sample.avi > /Volumes/Filestore/dvdrip-output/$dir/$title-checksum
touch /Volumes/Filestore/dvdrip-output/$dir/rippedchecksums.txt
fi
done

cat /Volumes/Filestore/dvdrip-output/$dir/*checksum >> /Volumes/Filestore/dvdrip-output/$dir/allchecksums.txt

for title in {1..100}
do
if [ $title -gt $maxtitle ]
then
chmod -R 775 /Volumes/Filestore/dvdrip-output/$dir
sleep 3
drutil tray eject
exit 0
fi
sum=`cat /Volumes/Filestore/dvdrip-output/$dir/$title-checksum | awk '{print $1}'`
match=`grep $sum /Volumes/Filestore/dvdrip-output/$dir/rippedchecksums.txt`
if [ -z "$match" ]
then
echo "CURRENTLY RIPPING TITLE #$title"
/Applications/mencoder dvd://$title -alang en -ovc lavc -lavcopts vcodec=mpeg4:vhq:vbitrate="1200" -vf scale -zoom -xy 720 -oac mp3lame -lameopts br=128 -o /Volumes/Filestore/dvdrip-output/$dir/$title.avi
echo $sum >> /Volumes/Filestore/dvdrip-output/$dir/rippedchecksums.txt
rm /Volumes/Filestore/dvdrip-output/$dir/$title-checksum
rm /Volumes/Filestore/dvdrip-output/$dir/$title-sample.avi
fi
done
chmod -R 775 /Volumes/Filestore/dvdrip-output/$dir


Music Video Sorting?

Music: 

Teddybears ft. Robyn - Cobra Style

Anyone have any Deep Thoughts about how videos should be categorized? If not, skip it, this is really that boring.

--

Let's say for argument's sake that I'm building a playlist of videos from 120 Minutes (like, say, from this comprehensive list right here).

I've already decided that any band that gets one of their songs on 120 Minutes one time gets all of their songs in this folder. Because I don't want to have 3 different places where I can find songs of one band. It gets unruly. The only exception to this is the "Arcade Pizza" folder. These are songs that were ubiquitous on the radio when I was a kid, especially in arcades and pizzerias of the '80s and '90s. For that case I have /videos/Arcade Pizza, as well as /videos/120 Minutes/Arcade Pizza.

Question is, should I only put stuff that appeared on the actual show, or should I put bands that /should/ have been on 120 minutes, but weren't, because MTV could show neither the full name of the band nor the full name of the song involved?

Or what if they're too new, like this video philosophically belongs to 120 Minutes, but it's only a year and a half old:

I think they should go in, but I'm holding off. Teddybears would have been HEAVY ROTATION on 120 minutes if they'd existed then.

Should I kick Evan Dando out because he spoiled my Juliana? These are questions that require fucking answers.

I'm nearing 3000 music vids now, so these things are starting to become problems I have to think about. I need to nip this shit in the bud before I have 20,000 videos and no damn plan at all.

The Bonus Question is: Do I change the name of the Youtube video to fit a rational style, or leave it alone? For instance:

I Was A Teenage Zombie (2016) [heHh9EIlAbw].mp4

Should be renamed to:

The Fleshtones - I Was A Teenage Zombie (2016) [heHh9EIlAbw].mp4

The "[heHh9EIlAbw]" is the only actually important part of that filename anyway, since that's the video ID on Youtube, so it'll be youtube.com?v=heHh9EIlAbw. That is there for pattern matching, so I think that makes it OK to rename shit.

Right?
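
And since the bracketed ID is the part that matters, getting back to the URL from any filename, renamed or not, is a one-liner; a quick sketch:

### Pull the video ID back out of a filename, however it ends up named ###
file="The Fleshtones - I Was A Teenage Zombie (2016) [heHh9EIlAbw].mp4"
id=$(echo "$file" | grep -o '\[[^]]*\]' | tr -d '[]')
echo "https://www.youtube.com/watch?v=$id"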


Running the Lattice of Convenience

Music: 

New Order - 5 8 6

Since posting about the week of 1983 TV Guide viewing, I've had questions from some people wondering about the storage and other hardware and software we use for our media library. It's really not very complicated to do, though I do have preferences and recommendations.

So here's what we've got.

Motivation:

Mainly I don't like the level of control streaming companies have: they monitor everything we do, and stuff comes and goes from services like Netflix and Amazon Prime on their timeline, not mine. I don't like the concept of paying for things like Spotify so that I can rent access to music I already own.

I realized like 15 years ago that while we often spent $200/$300 per week on CDs earlier in our marriage, Natalie and I were drifting away from actually listening to it much, because who wants to dig around for a CD to hear one song, then move to another CD. Ultimately, the same applies to movies: we have lots of DVDs, and I don't want to have to dig through booklets just to watch a couple of James Bond movies.

It's super easy to maintain, and we like being able to watch Saturday morning cartoons, "Nick-at-Nite" or throw on music videos while we play arcade games and eat pizza. Once up and running, it's all pretty much push-button access to all the media we like.

Media:

- 2000-2500 CDs (Maybe 200GB of music)

- Couple hundred movies, really probably not as many as most people.

- Lots of TV shows. Space-wise, this is where it adds up fast when you're ripping a box-set of 10 seasons of some show.

- Commercials, mainly from the '80s and '90s, but I'll grab anything fun that strikes us.

- Music videos. We have an overall collection of around 2000, and a subgroup of about 700 which represent "'80s arcade or pizza place" music. That's music that was just ubiquitous when we were growing up in the '80s and early '90s, and you heard it all the time whether you liked it or not. I've since come to appreciate these songs and bands in a way I didn't when I was a dickhead punk kid.

So all told, there's about a 5TB library of stuff, mainly TV shows, but also a decent music library that needs to get maintained and served.

Hardware:

- Ripping machines - Mainly, all I need is the maximum number of DVD trays I can get my hands on. There's nothing special here. My tools work on Mac or Linux so I can work wherever. We have one main Mac Pro that has 2x 8TB drives mirrored which hold the master copy of the media collection.

- NAS - Seagate GoFlex Home from like 10 years ago. I think I originally bought this with a 1TB drive, and have since upgraded it twice, which is kind of a massive pain. Now it's got an 8TB drive which has a copy of the media library from our main machine. I'll get into the pros and cons of this thing below.

- Raspberry Pi - I have a multi-use Raspberry Pi which does various tasks to make things convenient and optimize TV viewing. There are a handful of scripts which create random playlists every night for various categories of music videos, TV shows (Sitcoms, 'BritBox', 'Nick-at-Nite'), etc. It also runs mt-daapd, which I'll get into below.

- Amazon Fire Sticks - We have a couple of them. I'm not super impressed with their 8GB storage limit, but I'm definitely happy enough for the money they cost. They're cheap, around $20 now, and they do what they say on the box. Play video. I have side-loaded Kodi 17.x, but they seem not to quite have the resources for 18.x, though I'm really not sure why not. It's just slower.

- The Shitphone Army - I've got obsolete phones (Samsung Galaxy S4-ish) around the house and decent speakers set up so we can have music playing while doing the dishes for example.

Software:

- Kodi - I mentioned Kodi, which is just an excellent Free Software media library manager. Kodi gets /such/ a bad rap because of all the malware infected pirate boxes for sale, but you never see much from people who actually use it to manage a locally stored library of media they own. Can't recommend it enough. Get familiar with customizing menus in Kodi and making home-screen buttons linking directly to playlists. It's worth it and makes it look nice and easy to use.

- mt-daapd - I'm running out of patience with music streaming, though everything does work right now. mt-daapd basically just serves up a library of music using the DAAP protocol, which used to be used by iTunes.

- DAAP (Android app) - This could be great, but it seems to be completely un-maintained, and somewhat recently moved from being open source to closed, so unless I have an off-line copy of the source, there go my dreams of updating it. But it works well on the Shitphone Army and on the road so we can basically stream from anywhere. Other DAAP players for Android are pretty much all paid applications, and none of them seem to work better particularly than DAAP.

- Scripts - A handful of poorly written scripts for ripping DVDs and maintenance of the library (below)

Recommendations:

Players - While the Fire Sticks work great, they're really very dependent on having constant access to Amazon. Were I installing mainly a Kodi machine, it would be much better to use a Raspberry Pi either with a direct-connected drive or mounting a network share. It's super easy to set up with ready-to-go disk images which boot straight into Kodi.

Playlists - Create lots of playlists. Playlists and randomizing things are two things that Kodi is terrible at, so I don't try to make it do it. These scripts run nightly on the Raspberry Pi and make .M3Us for us.
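
The nightly part is just cron on the Pi; something like this, with the wrapper script name made up:

# Hypothetical crontab entry on the Pi; the script path is a placeholder
30 3 * * * /home/xrayspx/bin/nightly-playlists.sh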

Filenames - Have a good naming convention. All my playlists are M3Us of just lists of files. That means that you don't get Kodi's metadata database with the pretty titles and descriptions, and so the files must be named descriptively enough that you can tell what episode you're looking at from the list of filenames. My template is "Name of the Show - S02E25 - Title of the Episode". Kodi's scrapers work well with that format and it makes it easy enough to fire up the Nick-at-Nite playlist and decide where to jump in.

At various times, I've considered parsing a copy of the Kodi database to suck out the metadata and add it in before the file location. In an M3U, that looks like this:

#EXTINF:185,Ian Dury & The Blockheads - There Ain't Half Been Some Clever Bastards
/mnt/eSata/filestore/CDs/Ian Dury & The Blockheads/Ian Dury And The Blockheads The Best Of Sex & Drugs & Rock & Roll/17 There Ain't Half Been Some Clever Bastards.mp3

It seems like having all that sqlite stuff happening would add a lot of overhead to generating playlists, and having well-named files saves me from having to worry about it, so I haven't bothered.
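
If I ever do bother, the music side would look roughly like this, assuming the songs table also has artist, title and length columns next to the url and rating columns the other scripts use (the column names and length units would need checking against the real schema):

#! /bin/bash

### Hypothetical EXTINF generator; column names and length units are assumptions ###
sqlite3 -separator '|' /var/tmp/strawberry.db \
 "select length, artist, title, url from songs where rating > 0.9;" |
while IFS='|' read -r length artist title url
do
        file=$(/home/xrayspx/bin/urldecode.sh "$(echo "$url" | awk -F "file://" '{print $2}')")
        echo "#EXTINF:$length,$artist - $title"
        echo "$file"
done > /var/tmp/5-star-extinf.m3u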

Storage - Though I use a "Home NAS" product that overall I've been pretty happy with, it does irritate me. Consumer market stuff is /so/ proprietary that it's quite hard to just get to the Linux system beneath and customize it the way you see fit. Specifically in the case of the GoFlex, "rooting" it even involved replacing Seagate's customized version of SSH with a vanilla one. Screw that up and you brick the device. I also run into network bottleneck issues with that thing. While you can enable jumbo frames, for instance, when syncing new content the CPU gets pegged, I believe I'm running out of network or disk buffer, which is kind of unacceptable in a NAS device.

Building it today, I'd just use a Raspberry Pi 3 with a USB drive enclosure. For the time being, my growth curve is still (barely) pacing along with the largest "reasonably priced" drives on the market. My ceiling is about $200 per drive when I do upgrades, because I am a very cheap man.

I have no opinion on consumer RAID arrays. I can only imagine consumer RAID based NASs come with all the shit I hate about the GoFlex. Yes, I'm biased against consumer grade garbage tech and that's probably not going to change. I'll have to buy one someday I'm sure, but for now it's all being kept simple.

Backups - Keep backups. While I have multiple copies of everything, it does make me somewhat nervous that the only part of the media library currently being backed up off-site is the MP3 collection. That's got to change, and rsync is your friend. Ultimately I'll probably end up upgrading my home Internet from 20Mb/2Mb to something which will allow me to sync over a VPN tunnel to somewhere off-site (friend's house, work...).
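
When that happens, the off-site piece itself is a one-liner over ssh; the host and destination here are placeholders:

# Hypothetical off-site sync over ssh; host and destination path are placeholders
rsync -az --delete /Volumes/Filestore/CDs/ xrayspx@offsite.example.com:/backups/CDs/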

Sample Scripts:

Here are some samples of the shitty bash scripts that run this whole nonsense. I know the better ways to write these, but the fastest possible way to hammer these out worked well enough and there's no way I'm going to bother going back and fixing them to be honest.

Rip CDs

I use an application called Max on the Mac to rip CDs. I think its usefulness might be coming to an end, and I'm not sure what to do about that. It uses (used?) the MusicBrainz database to automatically fingerprint and tag discs, but with the last CD I ripped it seemed to have problems. You can run iTunes side by side with Max and drag the metadata over from there, so maybe that works well enough?

Anyway, I use that because I rip to both 320k CBR MP3 and FLAC. I have a shitload of stuff that really should be re-ripped since they're 128k and no FLAC, but I've so far been unmotivated to do so.
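
Finding the old 128k rips is at least easy to script; a sketch, assuming ffprobe (part of ffmpeg) is installed:

#! /bin/bash

### Hypothetical scan for low-bitrate rips; assumes ffprobe is installed ###
find /Volumes/Filestore/CDs -name "*.mp3" -print0 | while IFS= read -r -d '' f
do
        br=$(ffprobe -v error -show_entries format=bit_rate -of default=noprint_wrappers=1:nokey=1 "$f")
        if [ -n "$br" ] && [ "$br" -lt 200000 ]
        then
                echo "$f"
        fi
done > /var/tmp/rerip-candidates.txt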

I wrote a bunch of stuff to move all the output files around and update iTunes libraries. Honestly I don't rip a whole lot of new music, which is a shame and which I should really fix.

Rip DVDs

DVD ripping is a lot more fragile than it should be. Good software like Handbrake is bullied into removing the ability to rip protected DVDs, and things are being pushed toward the commercial. I use mencoder in the script below.

DVD titles are sketchy at best, and as far as I know, you can't really fingerprint a DVD and scrape titles in the way you can with CDs. So I do what I can. I take whatever title the DVD presents and make an output directory based on that name plus a timestamp. That way if you're doing a whole box set and all the DVD titles are the same they're at least writing out to separate directories and not overwriting each other.

As far as file-naming, unfortunately we don't live in the future yet and that's all down to manually renaming each output file. I use the information from TVDB, not IMDB, since that's the default library used by Kodi's scrapers. Sometimes the order of things is different between that and IMDB (production order vs airing order vs DVD order issues plague this whole enterprise).

#! /bin/bash

timestamp=`date +%m%d%Y%H%M`
pid="$$"
caffeinate -w $pid &

id=$(drutil status |grep -m1 -o '/dev/disk[0-9]*')
if [ -z "$id" ]; then
echo "No Media Inserted"
else
name=`df | grep "$id" |grep -o /Volumes.* | awk -F "Volumes\/" '{print $2}' | sed 's/ /_/g'`

fi
name=`df | grep "$id" |grep -o /Volumes.* | awk -F "Volumes\/" '{print $2}' | sed 's/ /_/g'`
echo $name
dir="$name-$timestamp"
mkdir /Volumes/Filestore/dvdrip-output/$dir

echo $dir

for title in {1..100}
do
/Applications/mencoder dvd://$title -alang en -ovc lavc -lavcopts vcodec=mpeg4:vhq:vbitrate="1200" -vf scale -zoom -xy 640 -oac mp3lame -lameopts br=128 -o /Volumes/Filestore/dvdrip-output/$dir/$title.avi
done
chmod -R 775 /Volumes/Filestore/dvdrip-output/$dir

Playlist Script

The simplest Music Videos one below just looks at one directory of videos and one directory of TV commercials and randomizes all the content into an M3U. The more complicated ones have dozens of directories, and I'm sure I'm doing this array-building the wrong way. I'm sure I could have a text file with the un-escaped directory names I want and read that to build the array; either way, it really doesn't matter, because if I want to add a TV series I still have to edit a file, so this works fine. I've also thought about having a file in each directory like ".tags" that I search for terms in, like "comedy,nickatnite,british", and building the array from that (there's a sketch of that after the script below). I dunno, sounds like work.

#! /bin/bash

array=`find ./ -type f;
find ../../Commercials -type f`

printf '%s\n' "${array[@]}" | sort -R | grep -v dvd_extras | grep -v "./$" | grep -v "\.m3u" | grep -v -i ds_store | grep -v ".nzb" | grep -v ".srt" > full-collection-random.m3u
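
And since I mentioned it above, the ".tags" idea would be something like this; purely hypothetical, nothing uses it, and it assumes a .tags file sitting at the top of each show directory:

#! /bin/bash

### Hypothetical ".tags" version; each show directory would hold a .tags file like "comedy,nickatnite,british" ###
tag="nickatnite"

array=$(grep -l -w "$tag" ./*/.tags 2>/dev/null | while IFS= read -r tagfile
do
        find "$(dirname "$tagfile")" -type f
done)

printf '%s\n' "${array[@]}" | sort -R | grep -v dvd_extras | grep -v "\.m3u" | grep -v -i ds_store > ./1\ -\ Playlists/$tag.m3u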

- rsync the TV library. I have several of these, one for TV shows, one for movies, music videos, mp3s etc. It's just somewhat faster to only sync the thing I'm actually adding content to, rather than have to stat the entire library every time I rip a single DVD. The TV show sync tool also deals with the playlists, which are actually created on the NAS drive, so they have to be copied local before syncing or else they'll just get destroyed every day.

This checks to see if the NAS volume is mounted, if not it will mount it and re-run the script.

#! /bin/bash

mounted=`cat /Users/xrayspx/xrayspx-fs01/.touchfile`

if [ "$mounted" == "1" ]
then

cp ~/xrayspx-fs01/Common/TV\ Shows/1\ -\ Playlists/* /Volumes/Filestore/Common/TV\ Shows/1\ -\ Playlists/

rsync --progress -a --delete /Volumes/Filestore/Common/TV\ Shows/ ~/xrayspx-fs01/Common/TV\ Shows/

~/bin/umounter.sh
exit 0
else
mount -t smbfs //192.168.0.2/filestore ~/xrayspx-fs01/
~/bin/synctv
fi


The Lattice of Convenience

Music: 

Def Leppard - Bringin' on the Heartbreak

A couple of years ago, Natalie and I canceled cable since we found it had literally been a year since we watched anything live on TV. I've built a pretty good "lattice of convenience" to store a media library of "Crap we like" and conveniently stream it pretty much anywhere.

Over the years, we've collected maybe 3000 CDs and several hundred DVDs, including many box sets of TV series we like. I feel like when we were younger we spent a TON on CDs, more than most people did.


My Life Is Going To Suck Without Net Neutrality

Music: 

There are so many things I do which are likely to suffer with Net Neutrality's loss.

I run my own mail, web and cloud sharing services on a VPS that I maintain. Owncloud syncs all my devices, I use IMAP and webmail. I also run lots of "consumer" stuff for myself. I own 2500 CDs which I've ripped and share for my own personal use. I have playlists. I can connect with DAAP from my phone, and listen to my own CD collection, music I have paid for, Spotify style. I know people are saying "Spotify will work just fine", but what if I don't want to use Spotify?

These are all encrypted, personal connections. Nothing illegal is happening here. I'm not filesharing or streaming Torrents or any other grey-area services. It's just all my personal stuff, owned and manually copied myself, sharing to myself. No one gets ripped off here.

I can plug my Amazon Fire stick or Raspberry Pi into any TV and use Kodi to stream my own MP3s or movies, etc. I can use it to watch Amazon Prime or Netflix as well. Kodi also has a wealth of plugins to watch content from sources such as the PBS website. We all can watch Nova, or Julia Child, or even Antiques Roadshow over the Internet, for free, legally. This may all suffer when backbone providers and local ISPs can both decide which packets have priority over other traffic. PBS could be QOS'd out of the budgets of millions.

(Note *)I don't own a Nest or any other IOT garbage, but I have toyed with the idea of building my own, running on infrastructure I build. I don't want Google to know what temperature my house is right now. And I don't want some mass hack of 500 Million Nest users or idiot IOT Lightbulbs to let some Romanian turn my furnace off in the middle of February either.

So yeah, losing Net Neutrality could effectively disable all of this. Small hosts like me could be QoS'd off of the Internet entirely, unless we pay extra /at both ends/. Pay my hosting provider to pay their backbone providers to QoS my address at a decent speed. Then pay my consumer ISP to QoS my traffic so I can reach "The Good Internet", like they do in Portugal.

This is going to cut my lifeline to my own data, hosted by me on my own machines. Am I going to have to pay an additional "Get Decent Internet Access Beyond Google, Spotify, Facebook and Twitter" fee to the Hampton Inn just so we don't get QoS'd away from our own stuff? It's bad enough that the individual hotel can effectively do this already today, but the hotels are at least limited by the fact that they're in competition with each other and if they have ridiculously shitty Internet that you can't check your mail over, well people would notice that. Backbone providers pretty much have no such direct consumer accountability. No one's going to say "well, fuck that I'm not going to route over AT&T anymore", they might say "Hilton has shitty Internet, I'm going to Marriott".

One of the most demoralizing parts of this is that the rule-makers just don't get it. I already know they don't care, but former FCC Chair Michael Powell's statement, which boils down to "You can still use Facebook, (Amazon) Alexa, Google and Instagram, just like you can now", is missing the point, whether deliberately or not. That most "consumers" will be fine isn't the point. The point is that everyone be equal, and all traffic be routed equally.

* The risk to my information is proportional to the value an attacker places on the information. Could a state actor target my email server and read my mail? Yeah, the Equation Group or Fancy Bear or some Eastern European ID theft ring could probably exploit some flaw in whatever software serves my VPS, or flat out order the ISP to give them access to my stuff, but why? What does the NSA gain by ransacking my mail server? Not much. How about criminal attackers? Breaking into my server gets them one mailbox; breaking into Yahoo, however, /would/ expose 1.5 billion accounts all at once, and give them that entire corpus of mail to search against, plus passwords they could use to try and attack everyone's bank account all at once.


T**e *h* S**n****s B***i**G, **k* ***m b****n*.

Music: 

Xebox - Bunker Buster

This week David Lowery grumpled many of the Interbutts as he published a list of 50 "undesirable" (read: "un-licensed") music lyrics sites to target for legal action by the National Music Publishers Association (NMPA). With some major exceptions (RapGenius!), many of these sites do, in fact, suck. They're undesirable from an Internet user standpoint as well what with pop-unders and malware.

The fact is, they are worried about lost revenue from the licensing fees these guys should be paying, and the fact that lyrics sites have tons of ads, and that it follows that their owners are sitting on massive piles of cash in the Caymans. So let's go sue 'em all and get that Scrooge McDuck money silo each of them has to have. Here's a better idea: why doesn't the industry run its own goddamn lyrics sites? Well hell, I bet since we live in The Future and all, you could even track how many times someone searches for a song and give Dave Lowery his quarter of a cent per 100 impressions for Euro-Trash Girl lyrics.

The claim that it's "ripping us off as artists" is unconvincing though. If someone's reading the lyrics, you must assume they're listening or have just listened to that song, which they either own or they don't (Keep going after those pirates, I can at least see the point kind of, best of luck). Very very few songs have lyrics that merit reading on their own without music surrounding them. No one is reading the lyrics to Dr. Heckyll & Mr. Jive who isn't also listening to that song right now.

The Musician as modern Shelley is in all but the most exceptional cases disingenuous at best (Fun fact: Search for Percy Shelley on Google, and the #3 hit after Wikipedia and Poets.org is poemhunter.com, one of the NMPA's targeted sites of IP thieves). Off the top of my head, I can think of four musicians whose lyrics I could just sit and read, and even that is only a handful of songs per artist. Also off the top of my head, I can think of zero musicians whose lyrics I have just sat and read as art for its own sake.

It certainly didn't take Tennyson to write Take The Skinheads Bowling.

"Industry Sues Morons, film at eleven". Fine. "Fragile snowflake genius loses livelihood when someone can search for their lyrics for /free(!)/". Well you lost me there pal.


Two Angles on the Country Badass

Music: 

Circle Jerks - American Heavy Metal Weekend

From Mike Ness:

To The Cramps:

I remember reading a record review for a rockabilly compilation (Which we own, and which is awesome) in which the writer claims it's disingenuous for the compilers to draw a line from 50's rockabilly to punk. He said in effect that punk owed nothin' to no one. Anyway, Johnny Cash came up followed by Hasil Adkins in iTunes just now and reminded me of that obvious music hater's review of a really good compilation. The review seems to have gone down the memory hole.

A Short list:

Sid Vicious covered an Eddie Cochran song, and it was popular.

Elvis Costello covered an entire person. That was popular too.

The Cramps are a thing which exists.

The Misfits, Ramones and Clash are also things which exist.

Jim Heath has a career.

As does Hank III.

GG Allin closes some sort of loop.
