KryoFlux - Timing and Measurement


This discussion follows on from the previous WIP about overflow - why it is not enough to only partially measure a signal - and from researching whether the hardware was capable of automatically reading and restarting the timer on the same edge it measures, a feature that actually worked out for a change.

The pin descriptors in our code have a setting indicating whether a port is active low or not, so the rest of the code only needs to deal with logical levels. The hardware interface code translates these automatically to the correct physical level, and this abstraction also applies to the sampling setup. Together these will make it much easier to port the code to different devices, without having to change the host software.
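The polarity abstraction described above can be sketched as follows. This is a minimal illustration only - the names (`PinDescriptor`, `to_physical`) are hypothetical and not the actual KryoFlux firmware structures:

```python
# Minimal sketch of the pin-descriptor idea. Names are hypothetical,
# not the actual KryoFlux firmware types.

class PinDescriptor:
    def __init__(self, name, active_low):
        self.name = name
        self.active_low = active_low  # polarity is stored here, once

    def to_physical(self, logical_level):
        # The rest of the code only deals in logical levels
        # (asserted = True); the hardware interface layer inverts
        # the level if the port is active low.
        return (not logical_level) if self.active_low else logical_level

index_pin = PinDescriptor("INDEX", active_low=True)
# Asserting the logical signal drives the physical line low:
assert index_pin.to_physical(True) is False
```

Porting to a device with different polarities then only means changing the descriptors, not the logic that uses them.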

This is what happens in the drive for the asserted (active) signal:

  • The drive detects flux change - a change of magnetic state, not a signal.
  • The hardware generates a signal, the exact timing and duration (width) of which depends on the circuitry used.

The duration should be long enough to be reliably detected by other hardware. Similarly, the index pulse is generated as a signal that can be detected. The width of the index pulse can change from drive type to drive type - or even between any two drives.

So even though the width of a flux reversal or an index signal is constant for a given drive, it depends on the attached drive. Also, since it’s not supposed to be measured, it’s probably being generated by something cheap, like capacitors, where the exact timing values can change over time. The point is that you have to measure the real time of the signal from the same edge to the same edge - that is, falling edge to falling edge, or rising edge to rising edge, depending on whether the signal is active-low or not. Anything else will be unreliable and drift too much after hundreds of thousands of samples.
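A small numeric sketch makes the same-edge rule concrete. The tick values here are made up for illustration; the point is that same-edge deltas depend only on the period, while mixed-edge deltas pick up the (drive-dependent) pulse width:

```python
# Sketch: why same-edge timing is independent of pulse width.
# Period and widths are made-up numbers, in arbitrary timer ticks.

def falling_edges(period, n):
    # The falling edge of pulse k occurs at k * period;
    # the pulse width plays no part.
    return [k * period for k in range(n)]

def rising_edges(period, width, n):
    # The rising edge trails the falling edge by the pulse width.
    return [k * period + width for k in range(n)]

period = 2000
for width in (50, 300):  # two drives with different pulse widths
    f = falling_edges(period, 5)
    deltas = [b - a for a, b in zip(f, f[1:])]
    # Same-edge measurement always recovers the true period:
    assert all(d == period for d in deltas)
    # Mixing edges (falling to rising) measures the width instead,
    # which varies between drives:
    assert rising_edges(period, width, 1)[0] - f[0] == width
```

With same-edge capture, a width that drifts over time or differs between drives simply cancels out of every measurement.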

Why does the exact time between consecutive index signals need to be measured?

To get the correct RPM. We do not know the current drive speed when sampling, it is never truly a fixed constant, and we cannot expect the user to know the correct value. The time measured for a sample (flux reversal period) must be corrected using both the speed of the drive used for reading and the speed of the drive of the intended target platform.

A simple example: a signal measured as 2us when read on a 300 RPM drive, would be measured as 1us on a 600 RPM drive (the disk rotates twice as fast, halving the time it takes to pass the same distance), or would be measured as 4us on a 150 RPM drive.

Which one is correct?

As you can see, interpreting the very same flux reversal time depends on:

  1. The speed of the drive used for sampling.
  2. The speed of the drive that would be used to interpret the data.

One without the other is always incorrect. The ratio of the two RPMs gives the correct absolute time a flux reversal takes according to, and as seen by, the target system - the system that would be meant to read the disk for real. So even though you’d give the target system’s drive speed (ideal and expected) as a constant per track (say, 300 RPM for an Amiga DD drive), you’d have to be absolutely sure of the speed of the drive sampling the data. However, drive speed is not entirely constant; it changes slightly. Even if it is very stable, it may have been set incorrectly - say, 301 RPM instead of 300 RPM.
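The speed correction described above can be sketched in a few lines; the function name is hypothetical, and the formula is simply the inverse scaling of time with rotational speed that the earlier 2us example implies:

```python
# Sketch of the RPM correction: time scales inversely with drive speed,
# so the ratio of the two RPMs converts between the two views.

def correct_time(measured_us, sample_rpm, target_rpm):
    # A faster target drive passes the same distance in less time.
    return measured_us * sample_rpm / target_rpm

# The earlier example: a 2us signal read on a 300 RPM drive...
assert correct_time(2.0, 300, 600) == 1.0   # is 1us to a 600 RPM drive
assert correct_time(2.0, 300, 150) == 4.0   # is 4us to a 150 RPM drive
```

Both RPM values matter: plugging in the wrong speed for either drive skews every sample by the same ratio.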

Since there are hundreds of thousands of samples, the differences add up eventually.
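To put a rough number on how the differences add up, here is a back-of-the-envelope sketch. The sample count and cell time are assumed values for illustration, not measurements:

```python
# Sketch: a tiny speed error compounds over a whole revolution.
# Assumed values: 2us cells, 100,000 samples, drive running 1 RPM fast.

true_cell_us = 2.0
samples_per_rev = 100_000
error_ratio = 301.0 / 300.0  # 301 RPM instead of 300 RPM

# Each sample is off by a fraction of a microsecond...
per_sample_error_us = true_cell_us * (error_ratio - 1)
# ...but summed over the revolution the drift is very real:
drift_us = samples_per_rev * per_sample_error_us
assert round(drift_us) == 667  # roughly 0.67 ms of accumulated drift
```

An error far too small to notice on any single flux reversal becomes a large offset by the end of a track.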

The alternative to specifying an expected constant for the reading drive’s speed is to actually measure it. You need a timer to compare against in order to tell how long it took for two consecutive index signals to be seen.

For a perfectly aligned 300 RPM drive the time measured would be 200 ms: 300 revolutions per minute is 5 revolutions per second, and 1/5 of a second is 200 milliseconds. We don’t want to assume this is the case (we know from experience that it very often isn’t), so the current RPM is continuously monitored and recorded by the firmware for each revolution sampled.
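The conversion from the measured index-to-index period back to an actual RPM is a one-liner; the function name here is hypothetical:

```python
# Sketch: recovering the actual RPM from the measured index period.

def rpm_from_index_period(period_ms):
    # One revolution takes period_ms milliseconds;
    # 60,000 ms per minute divided by that gives revolutions per minute.
    return 60_000.0 / period_ms

assert rpm_from_index_period(200.0) == 300.0  # the ideal 300 RPM case
# A drive spinning slightly fast shows up as a shorter period:
assert round(rpm_from_index_period(199.3), 1) == 301.1
```

Measured per revolution, this is the value used to correct the sampled flux timings instead of trusting a nominal constant.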

The index signal itself is also important, as it is the only marker on a disk that can be used to perfectly align data when writing, or to decide the exact position of data when reading.