There’s something still not right with our SD card initialisation routines.
We’ve suspected for a while that something is off with the clock edge used when reading and writing data to and from the SD card.

Our logic probe appears to be displaying what we expect to see, but in code we’re never trapping the correct responses back from the card. For example, we’re looking for 0x01 from the first init routine; we’re now getting non-0xFF responses, but they’re not the 0x01 the card should be sending.
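For context, the exchange we’re attempting follows the usual SD-over-SPI shape: clock out a six-byte command frame, then keep clocking 0xFF until the card answers with an R1 byte (top bit clear; 0x01 means “in idle state”). A rough host-side sketch of that pattern — spi_xfer here is a stand-in that replays a canned byte stream, not our PIC code:

```c
#include <stdint.h>

#define CMD_GO_IDLE_STATE 0
#define IN_IDLE_STATE     0x01

/* Fake bus for illustration: the card holds MISO high (0xFF) while the
   command frame goes out, idles one more byte, then answers R1 = 0x01. */
static const uint8_t card_bytes[] = {
    0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, /* during the 6 command bytes  */
    0xFF,                               /* turnaround before R1        */
    0x01                                /* R1: in-idle-state           */
};
static int card_pos = 0;

static uint8_t spi_xfer(uint8_t out) {
    (void)out; /* a real version would write SSPBUF and wait for BF */
    if (card_pos < (int)sizeof card_bytes) return card_bytes[card_pos++];
    return 0xFF;
}

uint8_t sendCommand(uint8_t cmd, uint32_t arg) {
    /* 6-byte frame: 0x40|cmd, 32-bit argument, CRC.
       0x95 is the fixed CRC for CMD0; later commands ignore it in SPI mode. */
    spi_xfer(0x40 | cmd);
    spi_xfer((uint8_t)(arg >> 24));
    spi_xfer((uint8_t)(arg >> 16));
    spi_xfer((uint8_t)(arg >> 8));
    spi_xfer((uint8_t)arg);
    spi_xfer(0x95);

    /* Poll up to 8 bytes for the R1 response (top bit clear). */
    for (int i = 0; i < 8; i++) {
        uint8_t r = spi_xfer(0xFF);
        if (!(r & 0x80)) return r;
    }
    return 0xFF; /* timed out: never saw a response */
}
```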

The probe says the card is returning 0x01 but the PIC is getting some other value, so we hacked in a bit of debugging code that we can watch on the “output” window:

// send the CS line low so the card will accept commands

// send the initialise command (CMD0)
r = sendCommand(CMD_GO_IDLE_STATE, 0);
if (r != IN_IDLE_STATE) {
      // debug: clock out a 0xFE marker, then echo the response we actually read
      sendClocks(2, 0xFE);
      sendClocks(2, r);
      return SD_RETCODE_NOT_IDLE;
}

// do rest of init code here


The resulting output looked like this….

The logic probe says we’re being sent 0x01 from the SD card but when we echo the incoming value back onto the SPI line, we get 0x7F.

So we’re expecting (and the probe says we’re getting) 0b00000001
And the PIC says it’s getting 0b01111111

However, comparing these against the timing graph, the 0x01 actually arrives during the transfer of the 0xFE byte (our error marker to say we’ve had a non-0xFF response from the card). So the PIC had already decided it was time to raise an error; at the point it sampled the response, the card was still sending 0xFF, and the real 0x01 turned up too late.

It looks like two bytes from the card, 0x00 followed by 0xFF, are being merged by the PIC: it’s taking the last bit of the 0x00 and appending the first seven bits of the 0xFF, giving us 0b01111111 (or 0x7F in hex).
So somewhere in this little lot, the responses from the SD card look like they’re being read on the “wrong” edge of the clock pin…
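That merge is exactly what a one-bit framing slip produces: if the PIC’s byte frame starts one bit before the card’s byte boundary, the byte it assembles is the tail of one byte plus the head of the next. A quick host-side sketch (sample_at_offset is made up purely for illustration, nothing PIC-specific) reproduces the 0x7F:

```c
#include <stdint.h>

/* Treat two consecutive bytes from the card as a 16-bit MSB-first
   stream and read 8 bits starting 'offset' bits in. offset 0 is a
   correctly framed read; offset 7 starts one bit before the second
   byte's boundary, i.e. a one-bit slip. */
uint8_t sample_at_offset(uint8_t a, uint8_t b, int offset) {
    uint16_t stream = ((uint16_t)a << 8) | b;
    return (uint8_t)((stream >> (8 - offset)) & 0xFF);
}
```

With a = 0x00 and b = 0xFF, a correctly framed read gives 0x00, but the slipped read gives exactly the 0x7F we’re seeing on the PIC.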
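For reference, SD cards in SPI mode expect SPI mode 0: clock idles low, data sampled on the rising edge. On a PIC18-style MSSP module (assuming that’s what’s in play here, with XC8-style register names), the relevant bits are CKE in SSPSTAT and CKP in SSPCON1, so the setup we’d expect is roughly:

```c
// hedged sketch, assuming a PIC18-style MSSP in SPI master mode
SSPSTATbits.SMP = 0;   // sample input in the middle of data output time
SSPSTATbits.CKE = 1;   // transmit on the active-to-idle clock transition
SSPCON1bits.CKP = 0;   // clock idles low (SPI mode 0)
SSPCON1bits.SSPEN = 1; // enable the MSSP
```

Getting CKE/CKP wrong shifts which edge the incoming bit is latched on, which would produce exactly the one-bit framing slip described above.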