Resolution Pt. 1

02011-01-01

Exhibit A: Schild’s Ladder, a Hard Sci-Fi book that was recommended to me.

Exhibit B: Art Tatum – Piano Starts Here – as performed by the Zenph software.

Schild’s Ladder describes a distant future 20,000 years from now, when humans seem to have conquered death. They choose to live embodied, in modified human bodies which can be grown and repaired and contain a number of computing devices, or disembodied as software inside computer-like devices. Either way they seem to live for thousands of years.

This is Ray Kurzweil’s dream, a dream he calls the Singularity, and which others have called the Geek Rapture. From the ancient Taoist masters to Harry Potter’s Voldemort, to Ray Kurzweil – everyone wants to live forever. So far so good.

Zenph describes their work like this:

Our first offering is a service where we take audio recordings (from any source: LPs, tapes, 78s, CDs, even wax cylinders) and convert them back into the precise, nuanced keystrokes and pedal motions that would have been used to create them. This is done in new data formats which can be played back with phenomenal reality on corresponding high-resolution computer-controlled grand pianos. Rachmaninoff, Glenn Gould, and Art Tatum can literally play “live” again.

When C. introduced the Art Tatum recording to me he wrote:

Never listened to much Art Tatum before. Jaw-dropping. A little weirded out by the recreation software bit. Can’t help but feel there’s something artificial in there.

O:

Impressive.

Do you think the day will come when geeks realize that there is more to a musical performance than the data, the notes, the velocities, and so on? The living, breathing human can do things that are just a little more than that… It is something that shines through a performance and touches the audience, like a candle that shines a light through a piece of fabric and throws a gentle shadow across everything.

S:

A reading from the Zenph website indicates, “The high-resolution specs we’re using vary among instruments, but all offer 10 bits of data to preserve the velocity of each key (compared to 7 bits in regular MIDI), as well as detailed information about the key and pedal positioning. We feel the word “re-performance” summarizes this technique perfectly.” This reminds me of the Bösendorfer CEUS system, and I have to wonder if it is what Zenph is using, although Zenph seems to have less resolution than the Bösendorfer CEUS, based purely on their own disclosure.

The specification of the Bösendorfer CEUS is available here, and the actual details are of interest only technically. But I suspect that the Bösendorfer CEUS system is extremely nuanced: the kind of technology that would be required to capture an Art Tatum or a Glenn Gould. No such technology existed in their respective eras, though, so those performances are probably lost to time, and I don’t know what Zenph is doing in the case of Tatum. But I’m not certain (in my case) that my enthusiasm for the Bösendorfer CEUS is due to geek-style reductionism. I think that the Bösendorfer CEUS is actually capturing MUCH more content and nuance than simply notes and velocities. It captures hammer locations, pre-movement, and so on, way beyond what MIDI or anything else does, and it does so dynamically: i.e., the resolution changes as the performance does. It isn’t simply a hardwired 0-127 step size.
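
As a rough illustration of what those extra bits buy, here is a minimal Python sketch comparing the two velocity grids. The bit depths are the ones Zenph quotes; the function name and the error figure (just half a quantization step) are illustrative, not anything measured on real hardware.

    # Compare the velocity grids of standard MIDI (7-bit) and the 10-bit
    # "high-resolution" format Zenph describes. Illustrative only.

    def velocity_levels(bits):
        """Number of distinct velocity steps a given bit depth can encode."""
        return 2 ** bits

    for name, bits in [("standard MIDI", 7), ("10-bit high-res", 10)]:
        n = velocity_levels(bits)
        # worst case is half a quantization step, as a percent of full scale
        worst_error = 100.0 / (2 * (n - 1))
        print(f"{name}: {n} levels, worst-case rounding error ~{worst_error:.3f}% of full scale")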

O:

I think perhaps this is the same Kurzweil-ish idea of downloading a mind into a computer… It depends on whether a human is only the sum of the electric impulses in his head, or perhaps much more. There is the body, the gut-brain, muscle memory… the heart-mind (the Chinese character encompasses both heart and mind, but in translation most people choose one or the other)… and then there is also the human antenna: I am not sure that we are really isolated instances of consciousness; rather, I sense that we are hooked up to a larger consciousness, a network if you prefer.

Anyway, I agree with you that the performances sound rather eerie.

J:

Yeah – I agree with you. There is a lot of mystery in life, and it doesn’t work for me when everything gets reduced to computer-style processing. I like your image of the light through fabric to describe that which exists uniquely in a human performance.

A bit of a random thought, but I wonder if shocking performance art will become hugely popular as a backlash to this. Some people might decide they would rather see a person bleed all over the stage than watch a robot play an instrument.

S:

This is what Greg Egan discusses in “Permutation City.” Egan refers to this as “Dust theory.”

But both Egan’s ideas on the nature of the universe and reality and Kurzweil’s ideas assume that the Universe (in Egan’s take) and consciousness (in Kurzweil’s) are Turing computable. And this is where I depart from Kurzweil. I don’t believe that consciousness is Turing computable. The reason is that I’m not certain what computation is on a physical level. I don’t think we understand the function of the brain in sufficient detail to be able to say that it is “computing” something.

Consider something such as “free will.” From a computational standpoint: “If free will doesn’t exist, what goes on inside the head of a human being who thinks it does?” and “If free will does exist, what is the computational algorithm that brings it into existence?” This is, of course, nonsense. “Free will” is inherently self-contradictory, or meaningless because it has no testable consequences. (What testable consequences would there be if you “had” free will, vs. if you didn’t and were completely deterministic? What differences in reality would you see? Clearly the question is scientifically meaningless since it lacks falsifiability.)

The cognitive algorithms we use are the way the world feels. These cognitive algorithms may not have a bijective correspondence with reality – not even macroscopic reality, to say nothing of what’s going on at the quantum mechanical or quantum electrodynamic level. There can be things in the mind that cut skew to the world.

C:

If pressed, I doubt there are really many geeks who would be so reductionist…
But I have to say that if any recording *can* be successfully re-synthesized, it’s probably solo piano.
In 2011 Zenph is supposed to start re-synthesizing bass, and I’d bet money it’s going to fail a serious listening test/comparison.

S:

I agree with Canton’s take on piano performance, but disagree regarding geeks: many geeks are actually becoming more and more reductionist and strident, in my experience. It’s kind of surprising.

O:

You are certainly correct that solo piano is the most likely candidate, because of its mechanical nature. A piano is a music machine. However, exactly because the piano is mechanical, pianists work extra hard to fill it with life by subtly differentiating their attacks.

If, for example, the software and the motors of the player piano have a resolution of 100 velocities for each note, but Glenn Gould happens to play a particular note with a velocity of 57.7, then the software will have to round up to 58, which is an error of .3. Now let’s say we double the resolution to 200 velocities. Then 57.7 becomes 57.7 x 2 = 115.4. Here the software will round down to 115, which would equal 57.5 on our old scale of 100 velocities and would still be off by .2.

In order to truly perform the 57.7 one would have to increase the resolution tenfold, to 1,000 velocities…

And this game can be played for a long time, because good ole Glenn Gould might touch one key with a velocity of 57.7382…. and so on.

You might say a difference of .2 is probably not detectable. Perhaps that is so, but multiply by ten fingers and then take into account the length of the piece and we will come up with something that sounds impressive but eerily OFF.
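
(A minimal Python sketch of this rounding game follows; the 57.7 is the hypothetical velocity from above, not a measurement of anyone’s playing.)

    # Quantize a hypothetical "true" velocity of 57.7 (on a 0-100 scale)
    # at several player-piano resolutions and report the leftover error.

    true_velocity = 57.7  # hypothetical, on a 0-100 scale

    for steps in (100, 200, 1_000, 10_000):
        scaled = true_velocity * steps / 100   # map onto the finer grid
        played = round(scaled) * 100 / steps   # snap to the grid, map back
        error = abs(true_velocity - played)
        print(f"{steps:>6} steps: plays {played:.4f}, error {error:.4f}")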

Now imagine Ray Kurzweil “pouring” his mind-data into a mainframe!! If there is just a tiny bit of rounding up or down here and there, the resulting personality would be way off, don’t you think? Maybe in five hundred or a thousand years, but certainly not by the end of this century. Maybe not ever.

S:

I think Ray (and those of like mind) need to determine if the brain is even Turing computable. If not, the project is dead-ended. BUT… whether your physics consists of fields and particles in space, or flows of amplitude in configuration space, or even if you think reality consists of mathematical structures or Platonic computer programs, or whatever – I don’t see anything red or green or blue there, and yet I do see it right now, here in reality. So if Ray intends to tell me that reality consists solely of physics, mathematics, or computation, Ray needs to tell me where the colors are, and what they mean computationally. In the end, either Ray says that blueness is there, or it is not there. And if it is there, at least “in experience” or “in consciousness”, then something somewhere is blue. And all there is in the brain, according to standard physics, is a bunch of particles in various changing configurations. So: where’s the blue, algorithmically?

O:

They say it’s only a matter of resolution: if we increase the number of digital snapshots we will get a result that is equal to or better than the analog equivalent. Well, audio experts said in the Eighties that in order for digital sound to have the richness of analog sound, it would have to be at least 24-bit/192 kHz.

Instead the youth of the world has gotten used to mp3s that are a small fraction of the data of analog sound. 24/192kHz is more than six times the data of a CD… now compare that to the crappy mp3s everyone is listening to.
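
(The raw data rates back this up. A quick back-of-the-envelope Python calculation follows, assuming stereo uncompressed PCM and a typical 256 kbps download-store MP3; the function name is just for illustration.)

    # Rough data rates: 24-bit/192 kHz PCM vs. CD audio vs. a 256 kbps MP3.

    def pcm_kbps(bits, sample_rate_hz, channels=2):
        """Uncompressed PCM bit rate in kilobits per second."""
        return bits * sample_rate_hz * channels / 1000

    hi_res = pcm_kbps(24, 192_000)  # roughly 9216 kbps
    cd = pcm_kbps(16, 44_100)       # roughly 1411 kbps
    mp3 = 256                       # typical download-store MP3

    print(f"24/192 PCM:   {hi_res:.0f} kbps ({hi_res / cd:.1f}x a CD)")
    print(f"CD audio:     {cd:.0f} kbps")
    print(f"256 kbps MP3: {mp3} kbps (about {cd / mp3:.1f}x smaller than a CD)")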

The problem is that we are working hard at fitting into the parameters of the computer, the parameters of the player-piano velocities, the grid of MIDI – we are much quicker at adapting to the computer than the computer is advancing its resolution. mp3s have advanced to 256kbps in most download stores, but that’s a far cry from where analog sound was forty years ago!

I wonder about the German study Rushkoff cites in his book, where analog music made a difference in depressed humans, but the same music played in a digital format did not… Would like to know when that study was done, which resolution of digital they used and so on.

And I agree with you, the bass will be much harder to synthesize, and cello even harder. At least the bass is at the bottom of our hearing, but the cello is right in the middle of the spectrum and we would notice the smallest mistake.

S:

If the descriptor language were sufficiently advanced, the probability of success would be pretty high though. I can see this happening with systems like the Bösendorfer CEUS.

C:

What I like about what Zenph is doing is that it has (as far as I can tell) very little application or threat to modern recordings. The only place where it makes sense to go through the cost of the analysis and re-recording is when the original recordings are both (a) precious/widely appreciated, and (b) deteriorated or badly recorded in the first place. It’s entirely unlike the history of digital recording, where CD releases instantly replaced competing and better analog formats.

O:

And then mp3s replaced the competing and better CD format… :-)

So now we are TWO steps down from the Seventies, which Jon and I think gave rise to the best sounding recordings. The Eighties were interesting and original, but on a pure sonic level those early synths and drum machines really did not hold up well.

S:

I think that any bowed-instrument performance reproduction will be an epic fail. There is simply no way to digitally describe the performance of a bowed instrument to the degree necessary, along the lines of the Bösendorfer CEUS system. The entire viol family represents the state of the art in technology, and there just isn’t going to be a machine in the next 200 years with sufficient computational capability to play those instruments to the degree that someone like Pablo Casals would. And that’s allowing for exponential growth in computation and in descriptor languages for performances. I just doubt it seriously. Of course, I won’t be around to be proven wrong, so I don’t have to worry about my own claim chowder.

And that’s where the thread ended for now.

1 Comment

  1. Adam Solomon

    Interesting discussion. I agree something in the piano just sounds off. It *sounds* like MIDI. But as much as we’d hate to admit it, there is some level – though I doubt it’s achievable in the near future – where computer-generated recordings *will* be indistinguishable to the human ear from the real thing.

    Would you recommend Schild’s Ladder? The description looks fascinating, and decay of the false vacuum is a certifiably *awesome* plot device for a novel (although it’s a bit hokey to have it expand at half the speed of light for the sake of plot; such a thing would, to the best of our knowledge, expand at the speed of light and thus never give any forewarning of its impending doom :) ).

    Who’s S? He seems like a smart dude (and gets major props for citing Tegmark’s mathematical universe paper!). (Although I admit I don’t get the gist of his “where’s the blue?” question.)

