Teenbeat Mayhem launch date......

A second book is already completed, aside from tweaking and flowing it into a pro layout program, but it will only appeal to TeenBeat Mayhem collector-oriented diehards - those who want to know the entire rankings from #1 to 14,185 or so. It will list every rated song in descending order, ranked by the mean, along with the full vote breakdown each song received. The book also has a cross-reference list by title, so you can find a specific song and its vote tally results. I doubt this would sell more than a couple hundred copies, so it will be a limited run. I know that if my bunch of favorite 60s garage songs failed to make the official Top 1000, I would want to know how they fared and where they ranked overall. This is the book you would want if you're like me. It will run about 175 pages or so. Inexpensive, too - a soft-cover type deal.

Will TBM (the main book) list all 14,185 songs, or just the top 1,000 or...?
Every song that we could locate (well, that I could obtain via my own collection of 45s, cassette tapes and CDs, in addition to important contributions from G45 fanatics BossHoss, JoeyD, George, Rich, Jeff and others) was voted on - that is where the 14,185 (more like 14,200+) total comes from. Almost 90% of the 45s listed in TBM have been auditioned and rated by your fellow expert cabinet members.
So yes, every song that was rated is shown by a number from 1 to 10. There are only 62 true, killer-defined "10s" in the entire field, as I've mentioned several times in the past. It is all explained in the book.

The Top 1000 of these 14,200+ songs are all depicted by color label scans. The ranking runs from the all-time greatest USA garage tune (#1) down to #1,000; according to the mean calculated from the vote tabulation, a song needed at least a 6.8xx to make it into the Top 1000. BossHoss did a stupendous job (well, he had to do it twice - yikes - and he still talks to me!) and I know all will be pleased with the visual layout. Rather than just printing a text listing, I thought it would be far cooler to show the results by slotting the appropriate label scan in place of a title.
Whether or not you agree with the Top 1000 consensus derived ranking is another matter!

The 2nd book is for the geeks and statistical junkies like me: every song is ranked by the vote tally average (the "mean" for you math fanatics). Each and every song, from #1 to 14,200+ in descending order.
No label scans in this book (far too costly, too many pages, etc.). But what the listing will show is the full totals - the full decimal mean (i.e., 6.906 instead of 6) and the vote tally breakdown (how many people voted a song a 10, a 9, an 8, etc.) - so you can see the result of the vote sessions for every song we rated. There was obviously no page space to include such details in TBM, which is focused only on documenting the 1-to-10 ratings for the songs. As I do not "round up" any rating (a 6.906 mean is documented as a "6" in TBM), this book gives you an idea of how each song fared and where it ranks overall.
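To make the "no rounding up" rule concrete, here is a minimal sketch (Python, function name my own invention, not from the book) of truncating the full decimal mean down to the whole-number rating TBM prints:

```python
import math

def tbm_display_rating(mean: float) -> int:
    """Truncate the full-decimal mean to the whole-number rating
    printed in TBM -- never round up."""
    return math.floor(mean)

# A 6.906 mean prints as a "6" in TBM, even though
# conventional rounding would bump it up to a 7.
print(tbm_display_rating(6.906))  # 6
print(round(6.906))               # 7
```

So two songs printed as a "6" in TBM could sit hundreds of slots apart in the 2nd book's full-decimal ranking.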
 
Whether or not you agree with the Top 1000 consensus derived ranking is another matter!

I can't imagine anyone having a serious bone to pick with the way the votes turned out. In any case, the usefulness of the book is greatly enhanced by including the ratings. For anyone starting out collecting comps or 45s, it will lead them in the right direction.
I certainly wish I had that knowledge when I started out collecting!
 
Mike,
A year (or two) ago I discovered a blog I thought was yours called Teen Beat Mayhem. Forgive my ignorance here, but is this the same blog? There was info on about 6-10 45s there.
 
As a belated Holiday gift "peek" I thought I'd toss up a few titles with their vote breakdowns. This in-depth data will be disclosed in the follow-up book to TeenBeat Mayhem! Hopefully, this example from the 2nd book (Inside Teenbeat Mayhem, maybe?) will avoid confusion as to what I've been recently posting about in this thread.

The following titles, extracted from the last page of the 2nd book - the alphabetical-order listing (which follows the entire 1 thru 14,240+ ranking list) - show: the # where the song ranks in the Top 14,200+ list, the mean (the average yielded by the vote tally), and then the actual vote breakdown. The first number before the dash denotes the garage-o-meter 1-to-10 value rating; the second number after the dash shows how many cabinet members cast that garage-o-meter value.

7,686 your turn to cry – forum quorum 4.250 6-1 5-2 4-3 3-2
2,203 your turn to cry – lord douglas & serfs 6.000 8-2 7-1 6-3 5-3 4-1
4,085 your turn to cry – new lime 5.200 7-1 6-2 5-5 4-2
8,606 youth quake – mystic zephyrs 4 4.090 6-2 5-1 4-5 3-2 2-1
13,781 youth quake – skunks 2.300 4-1 3-2 2-6 1-1
14,088 yum yum eat em up - noises & sounds 1.700 3-1 2-5 1-4
9,828 zebra in the kitchen – standells 3.846 6-1 5-2 4-6 3-2 2-2
4,399 zeke, the – preachers 5.100 7-1 6-2 5-5 4-1 3-1
13,807 zelda klotz – new harlequins 2.222 5-1 3-3 2-2 1-4
7,389 zig zag news – sound sandwich 4.300 7-1 6-3 5-1 3-4 1-1
3,168 zipped up heart – reign 5.500 8-2 7-1 6-1 5-3 4-2 3-1
11,957 zipper – wally & the rights 3.300 4-3 3-7

Using the Forum Quorum song as an example, you can see its rank on the far left, followed by the mean (average) in full decimal value. TBM will show the song as a "4" - there is simply no room on the page to show the full decimal value. The vote breakdown reads as: one member voted the song a "6" out of 10, two members voted it a "5", three members voted it a "4", and two members awarded it a "3".
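The breakdown-to-mean arithmetic can be sketched like this (Python; the function names are my own, not anything from the book), recomputing the Forum Quorum entry from the listing above:

```python
def parse_breakdown(pairs: str) -> dict[int, int]:
    """Parse a 'value-count' vote breakdown like '6-1 5-2 4-3 3-2'
    into {garage-o-meter rating: number of members who cast it}."""
    breakdown = {}
    for pair in pairs.split():
        value, count = pair.split("-")
        breakdown[int(value)] = int(count)
    return breakdown

def vote_mean(breakdown: dict[int, int]) -> float:
    """Weighted average of the ratings over all votes cast."""
    total = sum(value * count for value, count in breakdown.items())
    votes = sum(breakdown.values())
    return total / votes

# "Your Turn to Cry" - Forum Quorum, ranked 7,686
fq = parse_breakdown("6-1 5-2 4-3 3-2")
mean = vote_mean(fq)          # (6 + 10 + 12 + 6) / 8
print(f"{mean:.3f}")          # 4.250 -- matches the listing
print(int(mean))              # 4     -- the truncated rating shown in TBM
```

Eight votes totaling 34 points yields the 4.250 mean shown in the listing, which TBM's main volume truncates to a "4".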

You'll note the wide variety of tastes (personal preferences) among the cabinet members (one member rated the Sound Sandwich tune highly, while another has a strong dislike for it). It was rare for member preferences / tastes to be almost exact across the board (Wally & The Rights). I feel the overall consensus works well and yields a very fair and conservative showing. The bias of flag-waving for one's locals, and of the top-heavy "everything is great" unfocused voters (guys who give everything a 7 or higher - thankfully I only had a couple of those on the team), is balanced by the 4-5 guys who are very strict in their appreciation of what separates a decent tune from a good, excellent, or outright great / killer song.

For those extra-observant types out there, the reason that the participation varied for songs is that the effort required 8 years of work. Only a handful of members were able to participate in every single session.
Any song which received at least one "10" vote cast by a member was required to yield a full member vote (14 votes). Otherwise, a bare minimum of 8 out of 14 member votes was required for the song to be officially noted and ranked. The normal participation was 10 or 11 members, in around 80% of the sessions.
Some sessions, like those which ran for the letter "S", had nearly full cabinet member participation (must've been during the winter months), while others required drafting the fill-in guys (to whom I'm ever thankful - couldn't have completed this without the bench guys!)
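The quorum rules described above (any "10" vote forces a full 14-member vote; otherwise at least 8 of 14 votes are needed) could be expressed as a simple validity check. A sketch, with names of my own invention:

```python
FULL_CABINET = 14  # total voting members per the stated rules
MIN_VOTES = 8      # bare minimum for a song to be officially ranked

def session_valid(votes: list[int]) -> bool:
    """Apply the stated quorum rules: a song receiving any '10'
    must be voted on by the full cabinet; otherwise at least
    8 of the 14 members must have cast a vote."""
    if 10 in votes:
        return len(votes) == FULL_CABINET
    return len(votes) >= MIN_VOTES

# Typical session: 10 members voting, no 10s cast -> valid.
print(session_valid([7, 6, 6, 5, 5, 4, 4, 3, 3, 2]))         # True
# A "10" was cast but only 11 members voted -> needs a full re-vote.
print(session_valid([10, 9, 8, 8, 7, 7, 6, 6, 5, 5, 4]))     # False
```

This is just an illustration of the rule as stated; the actual tallying was done by hand over the eight years of sessions.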
 
How were the "voting" cabinet members selected? I mean, what were the qualifiers to be considered for a voting position? Not that it really matters, I'm just curious.
 
I have to ask that after reviewing this...how did you do it in only eight years? I'm half-kidding, but that's an enormous task to undertake. It's going to take at least another eight years to absorb it all.
 
Tom - Guys were asked to participate; most of them were longtime pals who understood what I wanted to accomplish without me having to explain everything in detail. A few of them quickly realized just how much of a time investment this project would require on their behalf, so they politely bailed out after a few sessions. Thankfully one or two asked to be used whenever needed as the bench "fill-ins", which worked out nicely.
All of the members selected for participation have different musical tastes when it comes to 60s rock & roll, as focused thru a garage sound prism. I didn't want to have a team comprised of Tim Warren personality types, otherwise a true and fair consensus would not have been possible. I believe I had 21 members in total, both full and part-time on the team.

One guy who was eager to participate as a member of the cabinet team demanded that I "give him" full dubs of every song we would rate for the book, to compensate him for his "time". Well, he was politely dismissed and replaced after the very first session. As time involvement / workload was a big factor, I rarely uploaded an entire song from start to finish. It was far more expedient for me to use a 2-minute running clip edit for each song; 95% of the time, roughly 2 minutes provides enough of the song to yield a fair evaluation. In cases where an important tempo change or lead break (organ, guitar, etc.) did not appear until after the 2-minute mark, I would extend the sample beyond my standard two minutes. Sometimes I didn't have more than 1 minute of a clip to use. Also, I didn't want the songs to appear elsewhere on the internet via blogs, or on bootleg compilations.

Mike - well, the ranking project might have proceeded at a much faster clip had my initial notion come to fruition. I thought that I could just run a list of songs and send it to the cabinet voting team; I would upload the obscurities and lesser-known songs, and for the well-known songs I figured the guys would pull out their comps on LP / CD, or their 45s, and listen to arrive at their numeric vote. Ha! That took too much time and effort on their part, so I was the one who had to provide every single song in clip form, which required recording the song from various sources - the original 45, or the comp (vinyl, CD, cassette; whether home-made, bootleg, or official release). I whittled the workload process down to a quick running routine after a few months, and refined it over the years, but the time investment would crush the "gotta get this DONE" drive of any sane individual.

Someone once asked me why the book was taking such a long time. I replied that the attention to detail required to pull off a reliable reference tome is akin to compiling a printed telephone directory for the entire United States from scratch (pre-internet era, natch).
 
Thanks, Mike. I find all aspects of creating this masterpiece very interesting - like your follow-up book about the making of TBM. The guy (Mark Noble) who made that music movie about the rock scene in the Dallas/Fort Worth area in the 60s is doing a follow-up of the movie about how the movie came into being. It's all interesting.

Based on what you said about selecting the cabinet members, you did it right. You could not have had all "Tim Warren"-minded people on the cabinet and believe you got a fair representation. Did you employ any recognized and accepted statistical gathering methodologies? I did an extensive, in-depth study of an issue back when I still worked. When we met to discuss the "what next" part of the project, a lady with statistical training at Penn State questioned my method of selecting which groups were included in my study and asked if I had used any recognized methodology. My methodology was taught to me through years of experience in my field and the internal workings of our outfit - knowing who to ask and just plain common sense. Well, because I hadn't used recognized means of selecting the study group, I had to have her tell me who to study and do it all over again to give the results "credibility". Well, I did the damn thing again. Ha - the results were exactly the same as with my methodology. But now it was credible. I hope no one does that to your book.

If someone can create doubt as to the data integrity, it would probably really hurt your sales outside of forum members.
 
I doubt sales outside of the forum will end up in the hands of expert mathematicians and statisticians!

Funny you mention the "credibility" angle, Tom.
A good friend is a math professor (statistics) at a major University.
When I explained to him just how the TBM ratings were constructed, he started shooting holes in the process and the results. First, looking at it from a purely scientific and mathematical view, the mean is not considered a reliable unit of measure. Second, the small number of participants comprising my cabinet team would not yield a credible "scientific" sample.
Of course, I have valid reasons to deflect the onslaught of credibility-piercing "bullets", which (sort of) helped my friend comprehend what I'm trying to construct and portray with the results.

First off, the ratings are subjective and therefore cannot operate or function as a purely scientific study. There are no absolutes (nothing that can be measured "concretely" with defined boundaries like genre names, styles, etc.). Everyone is going to form their own biases and preferences based upon their own mental frame of reference - regardless of what one has learned. Therefore, it is a subjective process I am measuring and collating. Scientific "anti-TBM" rebuttal = toss it out the window.

Second, I was fortunate enough to have more than a dozen willing participants contribute to the song-rating project. My math pal said my "sample" should be much larger. Well, he's outta his gourd if he thinks I can get 50 or 100 people who know enough about 60s garage and all that to help and cast votes. I'd have to pay them, plus I'd never get the task completed. Scientific sample crap = tossed out the window.

Then there is my preference to rely on the mean to illustrate the vote tally rankings, but that is another topic entirely. I will say that using a deeper statistical method of measure devalues the high and the low votes. With the TBM song-ranking project, the 7 and the 1 (as voted for the Sound Sandwich tune in the above listing sample) are important in documenting the subjective reasoning of each voter. I strongly believe that devaluing the high and the low (known as tossing out, or deflating the mathematical impact of, the highest and lowest votes per the "skewed sampling" viewpoint) creates an altered result, not a true "voter consensus". Every cabinet member's vote is equally important, which is how it should be!
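To illustrate the difference, here is a sketch (Python; the trimming helper is my own, standing in for the "toss out the extremes" approach being rejected) comparing the plain mean against a trimmed mean on the Sound Sandwich votes from the listing above, where one member cast a 7 and another a 1:

```python
from statistics import mean

# "Zig Zag News" - Sound Sandwich: breakdown 7-1 6-3 5-1 3-4 1-1
votes = [7, 6, 6, 6, 5, 3, 3, 3, 3, 1]

def trimmed_mean(values: list[int], trim: int = 1) -> float:
    """Drop the `trim` highest and lowest votes before averaging --
    the approach the TBM rankings deliberately do NOT use."""
    kept = sorted(values)[trim:-trim]
    return mean(kept)

print(round(mean(votes), 3))          # 4.3   -- the consensus used in the book
print(round(trimmed_mean(votes), 3))  # 4.375 -- the 7 and the 1 no longer count
```

Trimming shifts the song's score (and thus its slot in the 14,200+ ranking) by discarding exactly the strong-opinion votes the project set out to document.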
 
I'm not against anything. I wish I could have come up with your response when that "lady" did it to me.
 
Exactly what I ran into. But her way gave the same results credibility. Go figure. She has a degree and I don't; therefore, what could I possibly know? What a crock. I'm not trying to put a downer on anything you've done, Mike - it's just that some over-educated smart-ass could possibly bring this up. It seems they have, and you have a response. All is good.
 
the ratings are subjective and therefore cannot operate or function as a purely scientific study. There are no absolutes

Music is by humans, for humans - not by machines for machines - so your book is reflective of its source and its audience; really dry, objective books about art are for people who probably don't feel anything from it anyway. The educational arms race has gone too far in many ways - pieces of paper in place of individuals - and yet it was individuals, often outsiders going against the status quo, who made most of the big scientific breakthroughs and discoveries, things that made the pieces of paper of their day obsolete and in need of rewriting. (Einstein supposedly failed math in school, and all that!) Time and again the stone the bureaucratic minds would throw out turns out to be the keystone, and it sounds like your book will be a keystone of its subject. What we need is a little less cut-and-dried precision on pieces of paper and a whole lot more opportunity in the real world again.