Encrypting micropython credentials (e.g., WiFi, API keys) #226
Replies: 1 comment
-
If your threat model includes attackers with physical access to your microcontroller as a major security risk, that can be very challenging to protect against. Devices like computers and phones usually mitigate this risk directly with a Trusted Platform Module: a hardware root of trust that stores secrets and verifies that the software running on the device (which accesses those secrets and could therefore be modified to leak them) has not been tampered with. As you might expect, that introduces complexity if you want to let users modify the code running on the device. Either the user has to cryptographically sign their modified code (and provide their signing key to the root of trust) so that the root of trust will accept it, or the device has to tell the user that the code has been modified and let them choose whether to run it anyway. Compared to using a TPM to establish a chain of trust over all code running on the device, it will probably be simpler, and easier for people to modify the software, if you mitigate physical-access risks with a combination of other mitigations that are worth doing anyway.
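Purely as an illustration of what "verify the software hasn't been tampered with" means, here is a minimal sketch that compares a hash of the application code against an expected value. A real root of trust does this in hardware, against a cryptographic signature, before the application ever runs; a check like this from inside MicroPython is not secure boot, it just makes the concept concrete. The file name and digest are placeholders.

```python
# Conceptual only: a real root of trust verifies a signature in hardware before boot.
# Checking a hash from inside the running application is NOT a secure boot mechanism.
import hashlib
import binascii

EXPECTED_SHA256 = "..."  # digest of the approved main.py, provisioned by whoever "signs" releases

def code_is_unmodified(path="main.py"):
    """Return True if the file on flash matches the expected digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        chunk = f.read(256)
        while chunk:
            h.update(chunk)
            chunk = f.read(256)
    return binascii.hexlify(h.digest()).decode() == EXPECTED_SHA256
```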
If your threat model treats accidental secret leakage from software as a more relevant risk than a malicious attacker with physical access to your device, a TPM may be less important. You can instead isolate the software modules that handle secrets from the modules that need to be changed frequently or customized easily, e.g. by splitting them into processes running on separate microcontrollers that communicate by passing messages and don't share memory. For example, your device could have a security-sensitive microcontroller that holds the Wi-Fi secrets, MQTT credentials, API keys, and device identity credentials. It would act as the MQTT client, API client, etc., and serve as a proxy for network communication between your "application" microcontroller (which has all the non-security-sensitive logic) and the outside world, forwarding messages and request/response data between the application microcontroller and the MQTT broker, API servers, etc.
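A minimal sketch of the application side of that split, assuming the two microcontrollers are wired together over UART and exchange newline-delimited JSON. The pin numbers, command name, and topic are placeholders; the point is that the application microcontroller never holds any credentials itself.

```python
# Sketch: the "application" microcontroller never sees any credentials. It sends
# newline-delimited JSON requests over UART to a second, security-sensitive
# microcontroller, which holds the Wi-Fi/MQTT/API secrets and does the network calls.
import json
import time
from machine import UART, Pin

# Pin assignments are placeholders; adjust for your wiring (shown here for an RP2040).
uart = UART(1, baudrate=115200, tx=Pin(4), rx=Pin(5))

def proxy_request(command, payload, timeout_ms=5000):
    """Send one request to the secrets MCU and wait for a one-line JSON reply."""
    uart.write(json.dumps({"cmd": command, "data": payload}) + "\n")
    deadline = time.ticks_add(time.ticks_ms(), timeout_ms)
    while time.ticks_diff(deadline, time.ticks_ms()) > 0:
        line = uart.readline()
        if line:
            return json.loads(line)
        time.sleep_ms(10)
    raise OSError("secrets MCU did not respond")

# Example: publish an MQTT message without ever holding broker credentials here.
reply = proxy_request("mqtt_publish", {"topic": "sdl-demo/status", "msg": "experiment done"})
print(reply)
```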
-
The Pico W is designed with education rather than encryption/security in mind. If someone gets hold of the microcontroller, it's very straightforward to read anything stored in `secrets.py`, for example (WiFi credentials, API keys, etc.), since everything is in plain text. There are devices like the ATECC608 that help to address this issue, but I'm not sure how that would be used to make the contents of a `secrets.py` file secure, and there don't seem to be examples out there for WiFi credentials specifically. Also, if the variable ever gets stored or accessed within the code, even if it's encrypted on some external device, all one needs to do is add a print statement to the code and run it again.
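For context, the conventional `secrets.py` pattern looks something like the snippet below (values are placeholders), and the second snippet shows why encryption at rest alone doesn't help once the decrypted value is reachable from user code.

```python
# Typical plain-text secrets.py on a Pico W (values are placeholders). Anyone who can
# mount the board's filesystem (e.g., with Thonny or mpremote) can read it directly.
secrets = {
    "ssid": "my-network",
    "password": "my-wifi-password",
    "mqtt_username": "device-123",
    "mqtt_password": "another-plaintext-secret",
}
```

```python
# Even if the value were decrypted at runtime from a secure element like the ATECC608,
# any code that can access the variable can also leak it:
from secrets import secrets
print(secrets["password"])  # one added line exposes the credential
```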
WiFi Credentials
Some risks can be mitigated. For example, use a standalone router + IoT SIM card; risks are then restricted to the devices connected to that router. Rate limits and thresholds can be set securely online through the IoT SIM vendor's portal (e.g., Hologram's SIM card activation page). Someone could still connect to the network and siphon bandwidth for whatever purpose they want, but the limits and thresholds set on the IoT provider side put an upper bound on how much damage (i.e., data usage charges) could be done. Likewise, someone could potentially "brick" the other devices or cause significant lag by connecting to the network and using it constantly, but if the system is designed with stop conditions for unresponsive devices, the risk is limited to halting experiments.
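A hedged sketch of what such a "stop condition" could look like on the orchestration side: if a device stops acknowledging commands within a timeout, halt the campaign instead of retrying indefinitely. `send_command` and `check_for_reply` are placeholders for however you normally talk to the device (MQTT, HTTP, etc.).

```python
import time

class UnresponsiveDeviceError(RuntimeError):
    pass

def run_step(send_command, check_for_reply, params, timeout_s=30.0, poll_s=1.0):
    """Send one command and halt the campaign if the device never acknowledges it."""
    send_command(params)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        reply = check_for_reply()
        if reply is not None:
            return reply
        time.sleep(poll_s)
    # Stop condition: don't keep burning cellular data or leave hardware unattended.
    raise UnresponsiveDeviceError("device unresponsive; stopping rather than retrying forever")
```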
Data Logging (e.g., MongoDB)
I think this is largely taken care of in the process I use for setting up MongoDB (see the STAR Protocols manuscript), based on brainstorming in #127. A device-specific ID can be used to ensure "John is actually John", and the risk is restricted to the device and the permissions associated with that device.
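As a sketch of the device-ID idea, MicroPython's `machine.unique_id()` gives a stable, board-specific identifier that can be attached to every logged document. The endpoint URL, header name, and field names below are placeholders for however you actually reach MongoDB (e.g., an Atlas Data API endpoint or your own ingestion service).

```python
# Hedged sketch: tag every logged document with a device-specific ID so the backend can
# check "John is actually John" and scope permissions per device. URL/header/fields are
# placeholders, not the actual endpoint used in this project.
import ubinascii
import machine
import urequests

DEVICE_ID = ubinascii.hexlify(machine.unique_id()).decode()  # stable, board-specific ID

def log_result(payload, api_key):
    doc = dict(payload)
    doc["device_id"] = DEVICE_ID
    resp = urequests.post(
        "https://example.com/log",     # placeholder ingestion endpoint
        json={"document": doc},
        headers={"api-key": api_key},  # per-device credential, not a shared one
    )
    resp.close()

log_result({"experiment": "color-mixing", "ch470": 1234}, api_key="REPLACE_ME")
```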
Device commands (e.g., MQTT via HiveMQ)
Similar to MongoDB, HiveMQ has access management support, even on the free tier. You have to set separate credentials for each device, but this is a way to limit the risk of unauthorized access/control to only the device that's been compromised. Someone could still steal the credentials unbeknownst to the researcher and use them both to control the device and to fake the data that gets sent back to the central brain (i.e., "Steve pretends to be John"). Inevitably, if someone is able to control a physical device, there are physical risks involved. These can be mitigated somewhat by having hard-coded safety checks on the device (#127 (reply in thread)). However, if someone had temporary access to the device (i.e., in order to steal the credentials), they also had the opportunity to change the source code unbeknownst to the researcher. There may be ways of "hardcoding" constraints or safety guards directly into the system (e.g., breakers that impose power limits, sprinkler systems, safe working distances); however, limiting physical access to the equipment and monitoring who accesses it may be one of the only truly robust ways to mitigate those risks. Short-range communication (e.g., Bluetooth, LoRa, or even physical connections) would also help mitigate the risk of unauthorized control happening remotely and long after credentials are stolen (which would otherwise be very difficult to track).
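A sketch of both ideas together, per-device MQTT credentials plus a hard-coded safety check, using `umqtt.simple`. The broker hostname, topic, credential values, temperature limit, and the `apply_setpoint` stub are placeholders; HiveMQ Cloud requires TLS, and depending on your `umqtt` version the `ssl` argument may need to be an `ssl.SSLContext` rather than `True`.

```python
# Hedged sketch: per-device MQTT credentials plus a hard-coded safety guard on incoming
# commands. Hostname, topic, credentials, and the 80 C limit are placeholders.
import json
from umqtt.simple import MQTTClient

MAX_SETPOINT_C = 80  # hard-coded safety guard, enforced regardless of the command received

def apply_setpoint(setpoint_c):
    # Placeholder for the device-specific actuation code.
    print("setting temperature to", setpoint_c)

def on_command(topic, msg):
    cmd = json.loads(msg)
    setpoint = min(float(cmd.get("setpoint_C", 0)), MAX_SETPOINT_C)  # clamp, never trust
    apply_setpoint(setpoint)

client = MQTTClient(
    client_id="device-123",              # unique per device
    server="your-cluster.hivemq.cloud",  # placeholder broker hostname
    port=8883,
    user="device-123",                   # separate credentials for each device
    password="per-device-password",
    ssl=True,                            # newer umqtt versions expect an ssl.SSLContext here
)
client.set_callback(on_command)
client.connect()
client.subscribe(b"sdl-demo/device-123/command")
while True:
    client.wait_msg()  # blocks until the next command arrives
```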
Some other thoughts:
Particularly relevant is: