Surprise, surprise - idle chatter about TBM

After flipping back and forth for a while, I found that the easiest way to crack TBM is to read it from A to Z. I'm rather enjoying this; now I regret not writing down new wants as they revealed themselves to me.

Would have loved to see a regular TBM column in UT or Flashback! Surely there must be stories, pictures and collector speak to fill volumes for many years to come. And it would be in print, rather than on blogs that have a tendency to vanish just when you've started to appreciate them. An extra added bonus for those of us who like to read in the bathroom (3 children related).
 
Mr Segment - A TBM 'zine is in the planning stages.
While I enjoy websites and blogs, I feel that other people do that better than I can / could. A 'zine is much more tangible to me.
I have features planned relating to the book, as well as some historical accounts.

I also want to respond to a couple of criticisms about the book:

1- The "Hometown" chapter was written to show that the same kind of musical activity occurred just about everywhere - no matter how small your own town was in population at that time. Look at the details of the performances listing- for a small town to have so many groups in town, and close-by proves that only a small fraction got the opportunity, or had the right mix of circumstance to have a 45 released. The xerox of newsprint clippings in the Dance, USA chapter is another case in point. Anyone interested can put the time in to research their own area and find similar results.

2 - The 'cabinet members' selected for the eight-year-long rating project ARE knowledgeable, with varied personal tastes. This is explained in the book. There are two guys who find most of the BFTG songs to be over-rated. That doesn't mean they automatically voted with a bias against those songs, but a song you might think is a 9, they voted a 7 or even a 6. There are psych heads on the team, who gave very high votes to songs like "Ostrich People," while others (perhaps those who operate by the punk ethos of garage) found it mediocre. And there are several guys who love the melodic-type tunes best.
Thus, to state that the ratings don't mean anything, or that they are not a reliable barometer of a song's consensus-derived value, is utter bullcrap. If more cabinet members had been involved, the results would not differ overall; perhaps the 4, 5, and 6 ratings would be up or down by one full rating. The only slight bias I could see was a preference for 45s from a voter's home locale or region.

3 - Regarding honorable mentions: those informative capsule bits were written to clarify factual info. Most examples, such as the Strawberry Alarm Clock's historical account, point out correct information, unlike what you might read from other sources. These capsule bits also note why the act is listed in TBM in the first place. There is no need to explain why a group like the Brogues is listed in the discography.
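To make point 2 concrete, here is a minimal sketch of mean-then-truncate consensus scoring, assuming truncation of the mean (the scheme debated later in this thread). The votes, panel size, and function name are all invented for illustration, not actual cabinet data.

```python
# Hypothetical sketch of mean-then-truncate consensus scoring; the
# votes, panel size, and function name are invented, not cabinet data.

def tbm_rating(votes):
    """Average all votes, then drop the fractional part."""
    return int(sum(votes) / len(votes))

easy_graders = [9, 8, 7, 9, 6, 8]   # mean 7.833
with_skeptics = [9, 8, 7, 9, 5, 7]  # two voters knock a point off: mean 7.5
print(tbm_rating(easy_graders))     # 7
print(tbm_rating(with_skeptics))    # 7 -- the individual bias washes out
```

With enough voters of varied tastes, one or two harsh (or generous) graders move the decimal mean only slightly, and often don't move the published integer at all.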
 
Print is always superior to digital, IMO. A spinoff TBM zine, preferably with a bit of Kicks-like hamburger & beer mentality informing the garage archeology, would be a delight to hold in one's hand.

That is the plan. The Sonics story (you know them erroneously as the Sonics Inc.) is currently being prepared as the big story for TBM 'Zine #1.
 
I may be able to help with a little label info on a record or two down the line, if you'd like me to.

I'm also curious about the band The Contenders who did "Johnny B. Goode" on Chattahoochee. Cool record, definitely garage, imo. Was there a reason for its omission?

Oh, one other thing. There are 63 10's and only 55 9's in TBM. I realize why this happened but, statistically speaking, wouldn't it make more sense to have more 9's than 10's? I assume there are more 7's than 8's, more 6's than 7's, etc.
 
The differences of personal opinion on, for example, "Last Time Around" (one guy says "meh" while another says killer) prove my point regarding the consensus methodology for voting. As a result, the TBM rating shown is a true reflection of a song's musical quality. If you think that a song rated an 8 should be a 9, well, that could be a valid disagreement. However, if you champion a song rated a 5 in TBM as a personal 10, your opinion is in the minority and not reflective of everyone else's take.
I'm more interested in finding out about records I don't know, but might like. Therefore I would have preferred the personal-opinion approach. By referencing the records I already know, it's easy to find out if somebody has a similar taste and take it from there. Fortunately for me, you did a great job with the song descriptions. Reminded me of some of the set sale lists I used to get, where sellers actually tried to describe a song instead of just hyping.
 

Any info is more than welcomed, please send along anything you might have. That goes for everyone else as well.

Contenders - I was unable to get a song clip for it; that is why it didn't get included at the last minute. It really was a big deal to reformat every page when an addition or correction needed to be made. I know we corrected every A to Z page at least 25-30 times during the course of the rating process. I keep mentioning page space; it is hard to fathom the time and effort involved in making even a simple addition to an existing page without disrupting pagination.

Why would it seem that there should be more 9s than 10s? You are only talking about a difference of fewer than 10 songs between the two values - a tiny fraction of one percent of the 14,800+ rated songs in the whole book.
If it were a larger differential showing more 10s than 9s, it would seem more of an anomaly. There are more 4s than any other rating value, followed by 3s, and then 5s. That's how the voting played out.
 
Thanks Mike, for taking the time to respond. On the 9's vs. 10's thing, I'm thinking of the ratings being like a bell curve of sorts, with the 4's being the "meaty part of the curve". I would assume that you had fewer 1's than 2's and fewer 2's than 3's. On the top end, you probably had fewer 8's than 7's, etc.

Of course, none of this is a big deal. The book is something I've already referred to numerous times, after the initial "couldn't set it down" phase. I also appreciate your efforts to correct any errors that have been found. Not many authors would go to the lengths that you have.
 
I raised the same question as Troggy back in the old garagepunk forum, regarding the normal distribution, aka the Bell Curve. In a stochastic process whose output spans a minimum X to a maximum Z, values will cluster around the mean Y and then fall off symmetrically, in decreasing numbers, to the right and left of the mean. As we all know.

Any deviations from this Bell Curve are an indication that the measuring (or in this case, rating) process is somehow askew, or that an unidentified factor is at play and influencing the process. I agree that it is not a major deal that there are more 10s than 9s (instead of vice versa), but it does signify... something.

Similarly, there should be more 5s than anything else, and in view of the integer truncation, which removes all intermediate values, there should be a noticeably larger number of 5s than 4s or 6s. The expected mean of the population is 5.5, which means the TBM "5" absorbs values on both sides of the peak for an additional boost, since 5.0 through 5.9 all become "5".

Or?

Topics like this are what the 60s were all about. :cool:
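One footnote to the bell-curve argument above: clustering around the mean shows up whenever each published number is itself an average of several independent votes, as the TBM ratings are. A quick simulation with invented uniform votes (not real cabinet data; the panel size is arbitrary) illustrates the mechanism:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# 10,000 hypothetical songs, each scored by 6 voters drawing uniform
# votes from 1-10; the published figure is the truncated mean.
N_SONGS, N_VOTERS = 10_000, 6
tally = {r: 0 for r in range(1, 11)}
for _ in range(N_SONGS):
    mean = sum(random.randint(1, 10) for _ in range(N_VOTERS)) / N_VOTERS
    tally[int(mean)] += 1

for rating in range(1, 11):
    print(rating, tally[rating])
```

Even though every single vote is uniform, the truncated means pile up in the 4-6 range and thin out toward 1 and 10: the averaging itself creates the hump. Real voters are of course not uniform, so this only shows the mechanism, not the TBM numbers.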
 
The overall mean is 4.333. There are more 4s than any other value.
The curve is not centered, and given the total number of values it reflects, the small difference between the number of 9s and 10s still leaves the curve with a near-normal ending slope.

The normal distribution / bell curve is often the result of applying some sort of calculation to create a smooth curve, not a result of tabulating raw figures. For instance, say I am teaching a class of 30 students and I give a test. A normal distribution would imply that most students' grades gather around the mean, with the fewest failures and near-perfect scores at each end. But it doesn't usually work that way. I could have 8 people "fail" with a score of 60, 4 people score 70-79, 7 people score 80-89, and 11 people score 90-99 out of a possible 100. There is no classic "curve" in that result.
 
Fair enough, though regarding your example, I recall more than one high school teacher who explicitly used the Bell Curve when setting grades. In other words, some poor sods received the lowest grades (except for outright failure) in order for the top students to get top grades, and vice versa. It seemed like a dumbass, mechanical approach to a reality of living people from many walks of life, which is probably why I still remember it. Before leaving this moderately exciting subject, here is a "model" distribution I generated based on the TBM scale. What this means is that if there are 10,000 tracks given 1-10 ratings, 440 of them should have a "1" rating. Etc. In a hypothetical Newtonian-Cartesian universe of wyld garage punk snot.



stdev 3.02765
avg 5.5

1 = 4.4%
2 = 6.8%
3 = 9.4%
4 = 11.7%
5 = 13.0%
6 = 13.0%
7 = 11.7%
8 = 9.4%
9 = 6.8%
10 = 4.4%
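For what it's worth, the percentages in that table look consistent with evaluating a normal density (mean 5.5; stdev 3.02765, the sample standard deviation of the integers 1 through 10) at each whole rating. A sketch that reproduces them, assuming that construction:

```python
from math import exp, pi, sqrt

MEAN = 5.5       # midpoint of the 1-10 scale
STDEV = 3.02765  # sample standard deviation of the integers 1..10

def normal_pdf(x, mu, sigma):
    """Density of the normal distribution at x."""
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

for rating in range(1, 11):
    print(rating, f"{100 * normal_pdf(rating, MEAN, STDEV):.1f}%")
```

Note that these densities are not renormalized, which is why the ten figures sum to roughly 90.6% rather than 100%.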
 
Here are some quick tally results per garage-o-meter (number of songs that received each rating)

10s = 63
9s = 55
8s = 191
7s = 609
6s = 1,519
5s = 2,760
4s = 4,564 (approx 820 songs achieved a 4.000 rating)
3s = 3,600
2s = 1,240
1s = 182
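A quick cross-check of that tally: summing the counts gives the book's total, and the mean of the truncated integer ratings comes out below the 4.333 overall mean quoted earlier, which is consistent with truncation discarding the fractional parts.

```python
# Tally from the garage-o-meter post above (rating -> number of songs).
tally = {10: 63, 9: 55, 8: 191, 7: 609, 6: 1519,
         5: 2760, 4: 4564, 3: 3600, 2: 1240, 1: 182}

total = sum(tally.values())
mean_of_integers = sum(r * n for r, n in tally.items()) / total

print(total)                       # 14783 rated songs
print(round(mean_of_integers, 3))  # 4.163
```

Dropping each song's fractional part can only pull the average down, so the integer mean (about 4.16) landing under the decimal mean (4.333) is exactly what truncation would predict.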
 

What you show here looks like a bell curve, with 4.333 being the average. The issue is that, by your method, what you really had were 118 9's and you had to award bonus points to create some 10's. I agree that your methodology in this case is no big deal.
 

Right, that is explained in Reverbaration, which probably few have read as yet.
Without the algorithm to factor bonus points for songs which received a 10 vote, none would have made it past a 9.
As for "rounding up" to solve the borderline songs which just missed moving to the next highest number - that creates a flawed result, penalizing songs which did not achieve the borderline tally. Those would then have to be rounded upward as well. And what about rounding downward, then?
 

You could have rounded everything that scored 1.501 to 1.999 to a two, for example. Everything from 2.000 to 2.500 would also rate a two. You would have had far fewer ones using this method and everything that scored from a 9.501 to a perfect 10 would be the tens. Would there have been any?

You always have to draw the line somewhere. As long as you're consistent, you're good.
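The two schemes being debated can be put side by side. A sketch with hypothetical decimal means, where `round_half_down` mirrors the 1.501-to-2.500 proposal above (ties such as 1.500 stay down):

```python
import math

def truncate(mean):
    """TBM's scheme: drop the fractional part (1.999 -> 1)."""
    return int(mean)

def round_half_down(mean):
    """The proposal above: 1.501-2.500 -> 2; ties (x.500) stay down."""
    return math.ceil(mean - 0.5)

# Hypothetical decimal means chosen to sit on the boundaries.
for m in [1.499, 1.500, 1.501, 1.999, 2.000, 2.500, 9.501]:
    print(f"{m:.3f} -> truncated {truncate(m)}, rounded {round_half_down(m)}")
```

Either scheme preserves the relative order of songs; the only difference is where the bin boundaries fall, which is the consistency point made above.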
 
Right. I was hoping this thread would have been more about the actual info brought forward in TBM than an endless discussion about the ratings. Sure, a lot of them rub me the wrong way as well, but hey, I'll ignore them after reading all the fuzz for the last few days!

Children of the Night - "World of Tears." A girl singing. Cool. I had to check Banjo Room, who had a small feature on this group years ago, to see if I'd blinked and missed the girl-vocalist info. But no, they did not mention it. Would never have guessed, though.
 
The girl who sang for the Children Of The Night died tragically in the 70s. She was an equestrian buff and was in a horrible accident - a horse reared up and kicked her in the head. I know someone who was very close to her - she really was a cool 60's chick for a high-school-aged girl in '67.
 
You could have rounded everything that scored 1.501 to 1.999 to a two, for example. Everything from 2.000 to 2.500 would also rate a two. You would have had far fewer ones using this method and everything that scored from a 9.501 to a perfect 10 would be the tens. Would there have been any?

You always have to draw the line somewhere. As long as you're consistent, you're good.

Well, no, that doesn't wash - a song that scores a 1.500 is still a 1, but a song at 1.501 = a 2? What, did you guys all fail math or somethin'? :lol: That is a huge decimal difference, and flawed reasoning. And what about the songs that scored a mean of exactly 2.000 - they would have to be rounded upward, as it is unfair that a song scoring 2.000 should be placed in equal company with a song that scored less than 2.000. 1.999 is not equal to 2.000 no matter how you may argue otherwise.

Rounding a decimal value upward works fine in monetary matters but not when assigning a rating to measure performance. Tossing out the high vote and the low vote (a trimmed mean) is also flawed, because it signifies that the person who voted a song the highest doesn't count, ditto for the low-vote guy. You have to employ some common sense along with computations.
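The discard-the-extremes approach mentioned above is what statisticians call a trimmed (sometimes "Olympic") mean. A quick sketch with an invented panel of votes, just to show what it does:

```python
def trimmed_mean(votes):
    """Drop the single highest and single lowest vote, average the rest."""
    if len(votes) < 3:
        raise ValueError("need at least 3 votes to trim both ends")
    trimmed = sorted(votes)[1:-1]
    return sum(trimmed) / len(trimmed)

panel = [10, 7, 7, 6, 2]          # invented votes with two outliers
print(sum(panel) / len(panel))    # plain mean: 6.4
print(trimmed_mean(panel))        # trimmed mean: 6.666...
```

Whether ignoring the extreme voters is robustness or unfairness is exactly the judgment call described above; the computation itself is standard.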
 
Not sure I follow your reasoning, Mike. In the world of everything as we know it, rounding is more common practice than truncation. I bet most people who read TBM casually will assume the ratings are rounded rather than truncated. I would call it "unfair" to give the same integer rating to a song scoring 1.99 as to a song scoring 1.00. Why is that more just than giving a "1" to a 1.50 and a "2" to a 1.51?

That said, I still think the TBM method is sound and consistent... I just wouldn't call it a given.

I was hoping this thread would have been more about the actual info brought forward in TBM than an endless discussion about the ratings.

First things first. This is merely an overture for the actual TBM talk.:cool: