In SSL/TLS, the client uses the server's public key. Since in general the client does not know the server's public key in advance, it expects to obtain it through the magic of the Public Key Infrastructure: the server's public key will be presented in a certificate, and the client will be able to verify that:
- the certificate contents are genuine (that's validation: signatures and names and certificate extensions are correct and link up to a trusted root);
- the certificate is owned by the intended server (the expected server name appears in the Subject Alt Name extension of the certificate, or in its Common Name if there is no SAN extension).
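For illustration, this is what the standard PKI-based check looks like from a Python client, as a minimal sketch using the standard ssl module; the default context performs both chain validation against the trusted roots and name verification:

```python
import socket
import ssl

def connect_with_pki(host: str, port: int = 443) -> ssl.SSLSocket:
    # create_default_context() loads the system's trusted roots and
    # enables both chain validation and hostname checking.
    ctx = ssl.create_default_context()
    sock = socket.create_connection((host, port))
    # wrap_socket() raises ssl.SSLCertVerificationError if either check fails.
    return ctx.wrap_socket(sock, server_hostname=host)
```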
Now all of this is needless complication if the client already knows the public key; if that key is known a priori, then the client can just use it and simply ignore the certificate sent by the server.
For the sake of compatibility with the formal specification of the protocol, and to more easily reuse existing libraries and implementations, it is probably even simpler to do what you suggest, i.e. verify that the server's certificate is, bit for bit, the expected certificate, and let the code extract the public key from it. The "fingerprint", if it is computed with a hash function which is resistant to second preimages (e.g. SHA-1), can be used for that check. This is fine.
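A rough sketch of that fingerprint check, assuming a Python client and a SHA-256 fingerprint of the device's certificate recorded in advance (the host, port and fingerprint values below are placeholders):

```python
import hashlib
import socket
import ssl

# Hypothetical values: the device's address and the fingerprint learned
# during the initialization phase.
DEVICE_HOST = "192.0.2.10"
DEVICE_PORT = 443
EXPECTED_FINGERPRINT = "..."  # hex SHA-256 of the device's DER-encoded certificate

def connect_with_pinned_cert(host, port, expected_fp):
    # No PKI here: we do not rely on a trusted root or on the server name,
    # only on the exact certificate we already know.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    sock = socket.create_connection((host, port))
    tls = ctx.wrap_socket(sock, server_hostname=host)

    # Compare the presented certificate, bit for bit, with the expected one
    # by hashing its DER encoding.
    der_cert = tls.getpeercert(binary_form=True)
    fingerprint = hashlib.sha256(der_cert).hexdigest()
    if fingerprint != expected_fp:
        tls.close()
        raise ssl.SSLError("certificate fingerprint mismatch: possible fake device")
    return tls
```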
In your case, I understand that the server is the device, and the client is your application. Your application won't be fooled (into talking to a fake device) as long as the attacker has not compromised the device's private key, i.e. extracted it from a device. If you have several devices, then each device should have its own private/public key pair. If all devices share the same key pair, then the private key cannot be considered "private": a secret which is known to more than two or three people is not a secret, but a rumour.
The "safe" way is then to have some sort of initialization phase, under controlled conditions, where the application instance learns the fingerprints of the devices to which it will thereafter connect (maybe the application generates the public/private key pairs and self-signed certificates, and imports them into the devices).
This is the security model used in SSH: the first connection from a client to a given server requires an explicit confirmation (the client shows the server's public key fingerprint, and the human user is supposed to verify it by, for instance, phoning the server sysadmin); afterwards, the client trusts that public key because it remembers the fingerprint.
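A minimal trust-on-first-use sketch in that spirit (the storage path and the interactive prompt are hypothetical; a real deployment would confirm the fingerprint out of band, as SSH users are supposed to do):

```python
import os

KNOWN_DEVICES_FILE = os.path.expanduser("~/.myapp/known_devices")  # hypothetical path

def check_or_learn_fingerprint(device_id: str, fingerprint: str) -> bool:
    """Remember the fingerprint on first contact; require an exact match later."""
    known = {}
    if os.path.exists(KNOWN_DEVICES_FILE):
        with open(KNOWN_DEVICES_FILE) as f:
            for line in f:
                if line.strip():
                    dev, fp = line.split()
                    known[dev] = fp

    if device_id not in known:
        # First connection: this is where the human is supposed to verify the
        # fingerprint out of band before accepting it.
        print(f"New device {device_id}, fingerprint {fingerprint}")
        if input("Accept? [y/N] ").strip().lower() != "y":
            return False
        os.makedirs(os.path.dirname(KNOWN_DEVICES_FILE), exist_ok=True)
        with open(KNOWN_DEVICES_FILE, "a") as f:
            f.write(f"{device_id} {fingerprint}\n")
        return True

    # Later connections: the remembered fingerprint must match exactly.
    return known[device_id] == fingerprint
```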
The "remember the key" model works well, but you must be aware that it forfeits the PKI feature known as revocation: an out-of-band, automatic mechanism to convey damage containment information. If the private key of one of the devices is compromised, then the thief can thereafter run a fake device and fool your application into connecting to it; to avoid this situation to persist, the application must somehow be warned that a given certificate fingerprint must no longer be accepted. Revocation checks, with regularly published CRL or OCSP responses, is an automatic method to do that. When you have remembered fingerprints, there is no PKI, thus no CRL. But the need may be still there.
If you follow the no-PKI road, with embedded certificate fingerprints, then you have to decide whether you need something to fulfil the job normally done by CRLs.
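If you do, one home-made substitute is to have the application periodically fetch a list of fingerprints that must no longer be accepted, from a source you control. A sketch, with a hypothetical URL; note that the list itself must be authenticated (e.g. signed), and its absence treated as suspicious, otherwise an attacker who controls the network can simply block it:

```python
import urllib.request

REVOCATION_URL = "https://example.com/revoked-fingerprints.txt"  # hypothetical endpoint

def load_revoked_fingerprints(url: str = REVOCATION_URL) -> set:
    # Fetch a plain list of fingerprints, one per line, that must be rejected.
    with urllib.request.urlopen(url, timeout=10) as resp:
        lines = resp.read().decode().splitlines()
    return {line.strip() for line in lines if line.strip()}

def fingerprint_is_acceptable(fingerprint: str, expected: str, revoked: set) -> bool:
    # Combine the pinning check with the revocation check.
    return fingerprint == expected and fingerprint not in revoked
```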