What to protect? - and concrete developer actions
Cyber security is a complex subject - filled with heavy math, but also with a lot of compliance and regulation. Before opening the latter - sometimes boring - box, let's take a look at what we want to protect. I use a lot of terms here; many of them are explained - to a degree - in the table at the bottom of this page.
What do we want to do?
- Protect data in transit
- Use TLS 1.3 or DTLS with AEAD ciphers and ECDHE for forward secrecy.
- Prefer mTLS for device authentication or use unique per-device tokens/certs.
- Use vetted libraries (mbedTLS, wolfSSL, BearSSL) and offload crypto to hardware where available.
- CI tests: reject weak ciphers, verify handshake behavior, test MITM and downgrade attempts.
- Protect data at rest
- Use authenticated encryption (AES-GCM, ChaCha20-Poly1305) for confidentiality and integrity.
- Do not rely on CRCs for security; use MACs/AEAD for tamper detection.
- Store keys in HW-backed keystore/secure element or protected flash with MPU/TrustZone.
- Tests: tampered blobs must fail verification; firmware dumps must not expose plaintext secrets.
- Protect boot-phase
- Establish a hardware root-of-trust and verified boot chain (ROM, signed bootloader, signed app).
- Implement anti-rollback (monotonic counter or version checks) and safe recovery paths.
- Lock or fuse debug/boot configuration in production as appropriate.
- Tests: device must refuse unsigned or downgraded images; secure-boot keys protected.
- Protect source code
- Enforce SCM access control, branch protection, mandatory code review and code-owner rules.
- Integrate SAST, secure-coding standards (MISRA/CERT), and pre-commit hooks into CI.
- Remove secrets from repo; use secret managers and secret-scanning in CI.
- Tests: PRs blocked until high-severity SAST issues fixed; no hardcoded secrets.
- Protect binaries and third-party components
- Maintain SBOM for all components and use automated dependency/CVE scanning.
- Pin dependency versions, maintain an approved-component list, and define patch policies.
- Tests: CI blocks builds with unpatched critical CVEs unless an approved exception exists.
- Protect firmware update mechanism (OTA)
- Require signed images, authenticated update servers, encrypted transport, and anti-rollback checks.
- Design staged rollouts and recovery for failed updates.
- Protect device identity & authentication
- Provision unique per-device keys/certs securely; use hardware-backed identity when possible.
- Support key rotation and revocation (short-lived certs or token renewal).
- Protect debug & manufacturing processes
- Secure manufacturing provisioning, protect factory keys, and require explicit factory unlock for debug.
- Protect CI/CD & build system - mainly against insider mistakes or malice
- Use signed and reproducible builds, manage pipeline secrets, and produce build provenance and SBOM.
- Protect logging, telemetry and privacy
- Sanitize logs, avoid PII and secrets, encrypt telemetry, and authenticate telemetry channels.
- Manage vulnerability & incident response
- Document disclosure process, patch management SLAs, orchestration for fleet updates, and incident playbooks.
- Deal with physical security and tamper resistance (as relevant)
- Consider tamper sensors, secure enclosures, potting, and side-channel mitigations where risk justifies cost.
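The "data in transit" bullets can be made concrete. On a constrained device this would be C with mbedTLS or wolfSSL; the sketch below uses Python's standard `ssl` module only to show the principle behind the CI bullet - refuse anything below TLS 1.3, so a downgrade attempt fails the handshake. Note that TLS 1.3 by design only offers AEAD ciphers and (EC)DHE key exchange, so forward secrecy comes for free.

```python
import ssl

def make_strict_client_context() -> ssl.SSLContext:
    """Build a client-side TLS context that refuses anything below TLS 1.3.

    A MITM downgrade attempt (forcing e.g. TLS 1.0) then fails the
    handshake instead of silently succeeding with a weak cipher suite.
    """
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # reject TLS 1.2 and below
    ctx.check_hostname = True                     # verify server identity
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx

# A CI check can assert the policy without opening a socket:
ctx = make_strict_client_context()
assert ctx.minimum_version == ssl.TLSVersion.TLSv1_3
```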
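The "do not rely on CRCs" point deserves a demonstration. Below is a minimal sketch using Python's stdlib `hmac` and `zlib` (the key is hardcoded for the demo only - real devices use a provisioned, hardware-protected secret, and typically AEAD rather than plain HMAC). It shows why a keyed MAC detects tampering while a CRC does not:

```python
import hashlib
import hmac
import zlib

# Demo key only - on a real device this comes from a secure element.
KEY = b"demo-only-key"

def protect(blob: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so any later tampering is detectable."""
    return blob + hmac.new(KEY, blob, hashlib.sha256).digest()

def verify(tagged: bytes) -> bool:
    """Recompute the tag over the payload and compare in constant time."""
    blob, tag = tagged[:-32], tagged[-32:]
    return hmac.compare_digest(tag, hmac.new(KEY, blob, hashlib.sha256).digest())

tagged = protect(b"config-v1")
assert verify(tagged)

# Flip a single bit: the HMAC check fails.
tampered = bytes([tagged[0] ^ 0x01]) + tagged[1:]
assert not verify(tampered)

# A CRC offers no such protection: an attacker who rewrites the blob
# simply recomputes a matching CRC, since no secret key is involved.
forged = b"evil-config"
matching_crc = zlib.crc32(forged)  # anyone can compute this
```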
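The anti-rollback bullet can be sketched as a check against a monotonic version counter. This is an illustration only - on real hardware the counter lives in OTP fuses or an RPMB partition so an attacker cannot simply decrement it:

```python
class AntiRollback:
    """Minimal anti-rollback sketch: a monotonic version counter."""

    def __init__(self, counter: int = 0):
        self.counter = counter  # on real HW: OTP fuses / RPMB, not RAM

    def accept(self, image_version: int) -> bool:
        """Refuse any image older than the stored counter."""
        return image_version >= self.counter

    def commit(self, image_version: int) -> None:
        """Advance the counter once the new image has booted successfully."""
        if image_version > self.counter:
            self.counter = image_version

ar = AntiRollback(counter=7)
assert ar.accept(8)          # upgrade: accepted
assert ar.accept(7)          # re-install of current version: accepted
assert not ar.accept(5)      # downgrade: refused
ar.commit(8)
assert not ar.accept(7)      # yesterday's image is now refused too
```

Committing only after a successful boot is deliberate: it keeps a recovery path open if the new image fails before it ever runs.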
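The CVE-gating bullet for third-party components might look like the sketch below. The SBOM entries, versions and the CVE feed are hypothetical stand-ins for illustration; a real pipeline parses CycloneDX/SPDX files and queries a vulnerability database:

```python
# Hypothetical SBOM and CVE feed - stand-ins for real scanner output.
sbom = [
    {"name": "mbedtls", "version": "2.16.0"},
    {"name": "zlib",    "version": "1.3.1"},
]
known_cves = {
    ("mbedtls", "2.16.0"): ["CVE-2020-10932"],  # example ID for illustration
}

def gate(sbom, known_cves, exceptions=frozenset()):
    """Return the CVE IDs that must block the build.

    An approved exception (waiver) lets a release through despite a
    known issue - mirroring the policy in the bullet above.
    """
    blocking = []
    for comp in sbom:
        for cve in known_cves.get((comp["name"], comp["version"]), []):
            if cve not in exceptions:
                blocking.append(cve)
    return blocking

assert gate(sbom, known_cves) == ["CVE-2020-10932"]          # build blocked
assert gate(sbom, known_cves, {"CVE-2020-10932"}) == []      # waived
```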
Which processes does the above lead to?
Testing & verification pipeline:
- Static analysis (SAST) and secure-coding checks in CI.
- Dependency/CVE scanning and SBOM generation.
- Unit/integration tests for crypto, auth, error handling, and OTA flows.
- Fuzzing of parsers and protocol stacks.
- Penetration testing (network and physical where relevant).
- Hardware tests: RNG health, key extraction resistance, JTAG/debug resistance.
Acceptance criteria examples (testable):
- All sensitive communications use TLS1.3/DTLS with AEAD and forward secrecy (CI/integration test).
- Device refuses unsigned or downgraded firmware (end-to-end test).
- No secrets or credentials in repo or build artifacts (secret-scan in CI).
- All third-party components have SBOM and no critical unpatched CVEs per policy (CI).
- Device identity keys are stored in HW-backed keystore or verified inaccessible (hardware test).
- RNG/TRNG passes self-test at boot (hardware test).
- Debug interfaces are disabled or require explicit factory unlock for production units (factory test).
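The RNG/TRNG self-test criterion can be illustrated with a crude monobit check: roughly half of all bits in a sample should be set. This only catches gross failures such as a stuck-at-0/1 TRNG - real designs implement the NIST SP 800-90B health tests (repetition count, adaptive proportion) - so treat this Python sketch as the idea in miniature:

```python
import os

def monobit_ok(sample: bytes, tolerance: float = 0.02) -> bool:
    """Pass if the fraction of 1-bits is within `tolerance` of 0.5.

    Catches stuck-at-0/1 generators; it says nothing about subtler
    statistical defects - that is what the SP 800-90B tests are for.
    """
    total_bits = len(sample) * 8
    ones = sum(bin(byte).count("1") for byte in sample)
    return abs(ones / total_bits - 0.5) <= tolerance

assert monobit_ok(os.urandom(4096))      # a healthy RNG passes
assert not monobit_ok(b"\x00" * 4096)    # stuck-at-zero fails
assert not monobit_ok(b"\xff" * 4096)    # stuck-at-one fails
```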
Prioritization (MVP / Phase 2 / Long-term):
- Minimum Viable Product (must have before shipping)
- Signed firmware & secure boot.
- TLS for sensitive comms, basic key management, unique device identity, secrets not in repos.
- CI gating for SAST and dependency CVE checks.
- OTA over authenticated encrypted channel.
- Later improvements
- mTLS, short-lived certs & automated renewal, hardware-backed identity for all SKUs.
- SBOM + automated patch deployment & rollback orchestration.
- Hardened production builds (fuse JTAG, lock bootloader).
- Long-term (advanced)
- Remote attestation, supply-chain provenance, continuous behavioral monitoring, tamper/side-channel protections.
Common pitfalls (summary):
- Treating CRC as security integrity — use cryptographic MACs/AEAD instead.
- Hardcoding credentials, keys, or secrets in source or firmware.
- Allowing downgrade or unsigned update paths.
- Leaving debug enabled on production units.
- Blindly trusting third-party libs without SBOM and update plan.
- No plan for CVE tracking, patching, or incident response.
Developer checklist:
- Unique device identity provisioned at manufacture/provisioning
- Keys stored in secure element or protected area; firmware cannot reveal them
- Secure boot chain enabled; bootloader & firmware signed
- OTA updates signed, integrity-checked and rollback-protected
- All sensitive comms use TLS1.3/DTLS with AEAD + ECDHE
- No hardcoded secrets in repo or artifacts (secret-scan passed)
- SAST and dependency CVE scan pass in CI (blocking for critical)
- Debug/JTAG locked for production units or require explicit factory unlock
- SBOM generated for each release and CVE policy applied
- RNG/TRNG health check on boot
- Logging sanitized (no PII/keys) and telemetry encrypted/authenticated
- Incident response & patch plan documented
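The "logging sanitized" item in the checklist can be sketched as a redaction pass over each log line before it leaves the device. The patterns below are hypothetical examples - extend them for your own token and key formats:

```python
import re

# Hypothetical redaction rules - illustration only, not a complete list.
_REDACTIONS = [
    # key=value style credentials: password=..., token: ..., api_key=...
    (re.compile(r"(password|token|api[_-]?key)\s*[=:]\s*\S+", re.I),
     r"\1=<redacted>"),
    # naive 16-digit card number (PAN)
    (re.compile(r"\b\d{16}\b"), "<redacted-pan>"),
]

def sanitize(line: str) -> str:
    """Strip obvious secrets/PII from a log line before it is emitted."""
    for pattern, replacement in _REDACTIONS:
        line = pattern.sub(replacement, line)
    return line

assert sanitize("login ok, token=eyJabc123") == "login ok, token=<redacted>"
assert "<redacted-pan>" in sanitize("card 4111111111111111 charged")
```

Redacting at the source beats filtering at the log server: a secret that never leaves the device cannot leak in transit or in backups.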
Terms and acronyms used above
| Term | Meaning |
|---|---|
| AEAD | Authenticated Encryption with Associated Data. AEAD is the modern way to both encrypt and authenticate messages safely. |
| AES-GCM | Advanced Encryption Standard - Galois/Counter Mode. AES superseded DES and triple-DES. |
| CA | Certificate Authority. An organization serving as common "root-of-trust". Some are installed with your OS and/or browser. |
| ChaCha20-Poly1305 | ChaCha20 stream cipher with Poly1305 message authentication |
| CERT | Computer Emergency Response Team (also refers to SEI CERT coding standards). |
| CoAP | Constrained Application Protocol. UDP-based protocol for resource-constrained devices. |
| CRC | Cyclic Redundancy Check. Can catch memory overwrites, bit errors in transmissions etc. - but is not security-relevant. See HMAC. |
| CRL | Certificate Revocation List. List of certificates that are not valid anymore. |
| CVE | Common Vulnerabilities and Exposures. Specific issues found in e.g., libraries. See more below. |
| DAST | Dynamic Application Security Testing (dynamic analysis). Black-box tools that try attacks such as SQL injection. |
| DTLS | Datagram Transport Layer Security. Does for UDP what TLS does for TCP. |
| ECDHE | Elliptic Curve Diffie-Hellman Ephemeral. Key-exchange scheme providing forward secrecy. Used in BLE and many other protocols. |
| FIPS | Federal Information Processing Standards. Computer security standards from NIST. |
| HMAC | Hash-based Message Authentication Code. Based on symmetric shared key. Ensures integrity and authenticity with message in plaintext. |
| HSM | Hardware Security Module. HW Server that generates, stores and manages keys and signatures. May be used in production to assure individual certificates in devices. |
| JTAG | Joint Test Action Group (debug/test interface). Interface and protocol for testing chips on PCB. |
| JWT | JSON Web Token. Format for representing security claims. |
| MCU | Microcontroller Unit |
| MISRA | Motor Industry Software Reliability Association (coding guidelines). See more below. |
| mTLS | Mutual TLS (Mutual Transport Layer Security). Both client and server authenticate with certificates. |
| OCSP | Online Certificate Status Protocol. Way of establishing whether a given certificate is revoked. |
| OSCORE | Object Security for Constrained RESTful Environments. An IoT lightweight secure protocol for REST. |
| OTP | One-Time Programmable (memory/fuse) — context-dependent. Often used in production for e.g., serial-numbers. |
| OTA | Over-The-Air (updates). Wireless firmware updates. |
| PRNG | Pseudo-Random Number Generator. Not as random as TRNG. |
| PSK | Pre-Shared Key. Key is shared via other media - e.g., user typing or QR-code. |
| RNG | Random Number Generator. On a computer 'random' is surprisingly difficult. |
| RTC | Real-Time Clock. Hardware-based clock containing calendar-info. Normally battery driven. |
| RTOS | Real-Time Operating System. Discussed in many places on this site. |
| SAST | Static Application Security Testing (static analysis). Aka Static Code Analysis. |
| SBOM | Software Bill of Materials. List of modules and libraries - with versions - used in a given release. |
| SCA | Software Composition Analysis. A way to generate an SBOM. |
| SNI | Server Name Indication. The client tells the server which hostname it addresses. Allows several sites to share one IP (virtual hosting). |
| SWD | Serial Wire Debug. 2-wire ARM standard for debugging microcontrollers. |
| TLS | Transport Layer Security. Modern successor to SSL; provides privacy and integrity for e.g., HTTPS. |
| TPM | Trusted Platform Module. HW-solution for secure boot and encryption. |
| TRNG | True Random Number Generator. More random than pseudo-random generators. |
Codebase processes
Many of the above "protect"-actions can be implemented like "normal" features - e.g., the use of signed certificates and various keys in protocol handshakes. Some processes, however, are embedded in the daily work and are meant to protect the codebase. These are what we focus on now.
The figure below shows most of the processes an embedded developer may be involved with when it comes to protecting the codebase.
The left column in the figure is the main input. Standards and guidelines may be real input, whereas requirements in many organizations change along the way - hence the dashed arrow from the design activity. The center column is where many developers will spend most of their time. The right column contains output - such as documentation and source, but also activities related to vulnerabilities.
Vulnerabilities
A vulnerability is an issue in your product that may be exploited by hackers - or triggered accidentally by normal use. Traditionally, in medical and transportation, "safety" was the major (often the only) risk considered for users. These days, however, all sectors also need to consider security. There are many different kinds of vulnerabilities:
- Your product may become unusable. This is DoS - Denial of Service.
- Your product may be made to do unwanted things. Sometimes your product does not seem affected, but it may be used in a botnet attack on other products or infrastructure.
- Your product may work completely as intended - but it may be possible to use it to extract private information about users. This is mainly considered a risk in the EU - regulated by the GDPR - but if we talk about e.g., leaked credit-card information, it is a problem everywhere.
Vulnerabilities are not in the center column in the figure above, because dealing with them often requires a lot of work from QA, marketing, sales and sometimes top management. "Vulnerability Disclosure" is about informing users about the issue - should they put it on a shelf until there is a patch?, are there workarounds?, when will there be a patch? - if ever, and so on. "Vulnerability Reporting" targets authorities - incl. updating the various vulnerability databases, in case someone uses your product in theirs. The term "CVE" means Common Vulnerabilities and Exposures. This relates to the databases that register these issues.
Weaknesses
Examples of weaknesses are "buffer overflow", "memory leakage", "null-pointer dereference" etc. These are classical problems that most of us have met in our careers. There are more or less advanced tools that can scan your software for these kinds of errors. This is not a test on the target product, but a scan of the source code on a PC. This is called "Static Code Analysis".
The term "CWE" means Common Weakness Enumeration. It is an eternal source of confusion that CWE and CVE look and sound almost the same, and deal with issues in the same domain - yet are very different.
Secure Coding Standards
For most software developers, the "Coding" phase in the above figure is where we want to be. Here, too, we see compliance demands. Many organizations require coding guidelines, normally inspired by one of the organizations below:
- OWASP - Open Worldwide Application Security Project. It is relevant to understand that the "W" in OWASP used to stand for "Web". Like all other standards organizations, OWASP is expanding its reign. They have excellent rules, but much is related to user interaction - like SQL injection - and client-server scenarios. A unix-like environment is also often assumed - so not your average low-level embedded controller. However, since I am also interested in web technology, I find it very interesting.
- SEI CERT - CERT = Computer Emergency Response Team, but SEI CERT refers to the guidelines from the Software Engineering Institute at Carnegie Mellon University. You will see that these are more guidelines than rules - and they often target a system with an OS (also RTOS), with sockets and files. SEI CERT was created for cyber security - but also safety. Several SEI CERT guidelines are only relevant in an environment with a filesystem, sockets etc.
- MISRA - Motor Industry Software Reliability Association. Contrary to the two other organizations, MISRA was originally defined to be about safety. However - they also want to grow. MISRA has been heavily used in automotive - but also in medical. Where the other two standards are mostly guidelines, MISRA has rules that can be "must (not)" or "should". MISRA is basically designed for a safe and secure setting (e.g., an Engine Control Unit) where programmers can make bad mistakes. MISRA comes from a world where small ECUs run a main program with a number of interrupts. A keyword here is "deterministic".
The right-side menu has links to the above three organizations. Note that while CERT and OWASP are easy to browse, MISRA wants money for their document. OWASP has an interesting Top-10 based on user input:
| Domain | Description |
|---|---|
| Broken Access Control | Users are allowed to act outside their intended permissions. |
| Cryptographic Failures | Missing or weak (homegrown) cryptography, old hash functions like MD5 (or even worse: CRC), insufficient randomness. |
| Injection | SQL or (unix-style) command injection as well as Cross-Site Scripting (XSS). |
| Insecure Design | Use threat-modeling, secure design patterns and reference architecture. |
| Security Misconfiguration | Highly configurable components are nice - but we need to know how to use them. |
| Vulnerable and Outdated Components | This is the above mentioned CVEs. |
| Identification and Authentication Failures | Used to be called "Broken Authentication". |
| Software and Data Integrity Failures | Insecure Software Updates and CI/CD pipelines. |
| Security Logging and Monitoring Failures | Do log failed login-attempts etc. (and throttle), but also do not disclose debug information to users. |
| Server-Side Request Forgery | Never trust URLs etc from users. |
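The "Injection" row above is easy to demonstrate. The sketch below (Python with the stdlib `sqlite3` module, an in-memory table invented for the demo) contrasts string-built SQL with parameter binding - the same principle applies to any database API:

```python
import sqlite3

# Tiny in-memory table for the demonstration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, role TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # String formatting lets the input rewrite the query - the injection bug.
    return db.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Placeholder binding: the input is data, never SQL.
    return db.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
assert find_user_unsafe(payload) == [("admin",)]   # injection leaks every row
assert find_user_safe(payload) == []               # treated as a literal name
```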
I recently participated in a training program from Secure Code Warrior - covering all the above guidelines and rules. It was surprisingly interactive and educational.
Standards and Requirements
I asked ChatGPT to provide an overview of the involved standards - and what they enforce. The output below (except for my remarks in the first column) was the response. I am not sure it's much help...
| Standards | EU: Cyber Resilience Act (CRA) | EU: Radio Equipment Directive, Delegated Reg. 2022/30 (RED DA) | EU: NIS2 (Directive 2022/2555) | US: IoT Cybersecurity Improvement Act (2020) (Federal procurement) | US: FCC U.S. Cyber Trust Mark (IoT label) | US: FDA FD&C §524B (medical devices) | US: OMB M‑22‑18 / M‑23‑16 + CISA Attestation (federal software) | US: California SB‑327 (consumer IoT) |
|---|---|---|---|---|---|---|---|---|
| Static code analysis (SAST): SW is scanned at source level - no running target | Risk-based, not prescriptive; part of secure development expected. | Not prescriptive; meet essential requirements (e.g., network protection); methods up to manufacturer. | Not prescriptive for products; focuses on org. risk mgmt. | Not prescriptive; follows NIST guidance for federal IoT buys. | Encouraged via NISTIR 8425 conformance; not named explicitly. | Expected as part of secure design evidence; not mandated by name. | Align with NIST SSDF practices; attestation of controls, not tool-specific. | |
| Dynamic testing (DAST): classic tests - unit/integration/system | Risk-based; acceptable means to verify requirements. | Not prescriptive; verification approach is up to manufacturer. | Not prescriptive for products. | Not prescriptive; NIST guidelines inform testing. | Implied in evaluation against criteria; not required by name. | Expected as part of verification evidence; not mandated by name. | Attest to secure dev practices; tool-agnostic. | |
| Software Composition Analysis (SCA) / SBOM: documenting SW libraries etc. | SCA helpful; SBOM not strictly mandated in CRA text; vulnerability handling required. | Not prescriptive; component security must be managed. | Not product-focused; supports CVD eco-system. | Encouraged via NIST guidelines; not universal requirement. | Often part of program criteria/registry details; not universally required. | SBOM explicitly required in submissions for 'cyber devices'. | SBOM/artifacts may be requested; attestation required (SSDF aligned). | |
| Fuzz testing: comm interfaces tested with malformed packets | Risk-based; not prescriptive. | Optional technique to meet criteria. | Useful evidence; not expressly mandated. | Optional per supplier practice; not required. | | | | |
| Threat modeling & secure-by-design: architecture analysis & traceability | Yes: security by design & risk assessment expected (Annex I). | Yes: meet essential requirements via risk analysis. | Yes: risk management measures for in-scope entities. | Encouraged by criteria; not named explicitly. | Yes: risk assessment and cybersecurity plan required. | Yes: NIST SSDF-aligned practices in attestation. | | |
| Vulnerability scanning (infra/app): comparing the SBOM with databases of known vulnerabilities in the components used | Part of vulnerability handling lifecycle; not prescriptive. | Implied by essential requirements. | Org-level measure; not product-specific. | Often part of evaluation; not mandatory by name. | Expected as part of monitoring & maintenance. | Tool-agnostic; control attestation. | | |
| Coordinated Vulnerability Disclosure (CVD) policy: rules for informing users about vulnerabilities | Explicitly required for manufacturers. | Implied manufacturer responsibilities for vulnerabilities. | Explicitly addressed: Member States designate CVD coordinators. | Encouraged in NIST guidance and federal procurement baselines. | Required/encouraged through program criteria based on NISTIR 8425. | Expected policy/process for postmarket handling. | Vulnerability disclosure practices aligned to SSDF; attest. | |
| Incident & exploited-vuln reporting to authorities: rules for informing authorities about exploited vulnerabilities | Yes: report exploited vulns & severe incidents via ENISA platform; 24h initial, follow-ups. | Not applicable (no central authority reporting). | Yes: entity incident reporting to national CSIRTs (not product manufacturers per se). | Not applicable outside federal procurement; no central reporting. | No authority reporting; consumer labeling program. | Yes: submission content & postmarket expectations; engage FDA as required. | No central reporting; agencies collect attestations. | |
| Secure update & patching mechanism: in-field updates of software | Yes: security updates during support period required. | Yes: protection against fraud/network harm implies update & patching capability. | Org-level continuity; not product mandate. | Patchability emphasized in NIST guidance for federal IoT. | Yes: program criteria include update policy/capabilities. | Yes: secure update mechanisms and maintenance plan required. | Addressed through secure development/maintenance practices. | Implied only via 'reasonable security'; not explicit. |
| Logging/monitoring & telemetry: collecting data on running products | Required to detect/respond proportionately (risk-based). | Implied by essential requirements. | Yes: org-level detection & response measures. | Program criteria expect baseline logging/telemetry. | Yes: monitoring and logging addressed in guidance. | Covered under SSDF practice areas; attest. | | |
| Cryptography & secure communications: protecting data in transit & at rest | Yes: state of the art protection for data & comms. | Yes: explicitly protect network & privacy. | Guidance-driven (no hardcoded creds, secure comms). | Yes: criteria include secure comms & data protection. | Yes: requirements for encryption/authentication. | Addressed via SSDF-aligned practices; not prescriptive. | Yes: effectively bans default passwords; implies stronger auth. | |
| Secure default configuration (e.g., no default passwords): role-based access on a need-to-do basis | Yes: secure-by-default expectation; ban on known-insecure defaults. | Yes: measures to prevent harm/fraud & protect privacy imply secure defaults. | Org-level; not product-specific. | Encouraged in NIST baselines (no hard-coded creds). | Yes: criteria expect strong default posture. | Yes: secure configuration expectations in submissions. | Yes: attest to secure configuration practices. | Explicit: unique passwords / no universal default passwords. |
| Penetration testing: using white-hat hackers | Not mandated; best practice. | Often expected/recommended; not mandated. | | | | | | |
| Security documentation / technical file | Yes: technical documentation incl. support period info. | Yes: demonstrate conformity to essential requirements. | Policy/procedure documentation at org level. | Documentation per NIST guidance for procurement. | Documentation required for label registry/QR details. | Yes: submit cybersecurity documentation incl. SBOM. | Yes: supplier attestation & artifacts repository. | Not specified beyond general compliance. |
| Supplier attestation to secure development practices: documentation on all the above | EU Declaration of Conformity & CE marking; not SSDF attestation. | Conformity assessment to RED DA; possible Notified Body. | Not relevant (operational directive). | Vendors must meet NIST baselines to sell to federal gov. | Third-party evaluation against program criteria. | Regulatory submission demonstrates compliance. | Yes: mandatory secure software development self-attestation to agencies. | |