
Functional safety for embedded systems – Virtual Roundtable (part 2 of 2)

Hosted by Jeff Shepard, EE World has organized this “virtual roundtable” on “Functional Safety for Embedded Systems.” Panelists include Anders Holmberg (AH), General Manager Embedded Development Tools with IAR Systems; Mike Dow (MD), Senior Product Manager, IoT Security with Silicon Labs; Jim McElroy (JM), VP of sales and marketing at LDRA Technology; and Dave Hughes (DH), founder and CEO of HCC Embedded.

JS:  What is the least understood aspect of designing functional safety in embedded systems and devices?

DH: In my opinion, there is far too much focus on static and dynamic analysis as the “be all and end all” of functional safety. These are just small parts of any functional safety development process. By far, the most important thing is traceability. This is the glue that binds the whole rationale for the safety case together. From requirements, through design and implementation, to test cases, the completeness of the traceability is what enables you to make a safety case.
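As a concrete illustration of that traceability, a unit test can carry the identifier of the requirement it verifies, so the chain from requirement to test case stays auditable. The sketch below is hypothetical; the requirement ID, test ID, threshold, and function names are all invented for the example.

    /* Hypothetical traceability sketch: the requirement and test identifiers
     * below are invented, but show how a test case can be tagged back to the
     * requirement it verifies. */
    #include <assert.h>

    /* REQ-PWR-012 (hypothetical): a supply voltage below 3.0 V shall be
     * reported as an undervoltage fault. */
    static int is_undervoltage(int millivolts)
    {
        return (millivolts < 3000) ? 1 : 0;
    }

    /* TEST-PWR-012-01: verifies REQ-PWR-012 at and around the threshold. */
    static void test_undervoltage_threshold(void)
    {
        assert(is_undervoltage(2999) == 1);
        assert(is_undervoltage(3000) == 0);
    }

    int main(void)
    {
        test_undervoltage_threshold();
        return 0;
    }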

Dave Hughes (DH), founder and CEO of HCC Embedded

JM: Least understood—the concept that functional safety is about designing automated safety and protection mechanisms that operate properly when the systems themselves may be operating in a degraded mode due to device or system failure or deliberate attack. Developing functionally safe systems requires solid programming practices as well as a different “protection-based” mindset. From there, developers also need to understand that functional safety is required at all levels—hardware, firmware, and software. With software, functional safety requires an auditable, traceable process, automated static and dynamic analysis, and measurable and adequate unit- and system-level testing.
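To make that “protection-based” mindset concrete, here is a minimal sketch, assuming a hypothetical temperature-controlled output: the protection mechanism checks the plausibility and freshness of its input and falls back to a defined safe state when the system it monitors is degraded. All names and limits are invented for the illustration.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical plausibility limits for the monitored sensor. */
    #define TEMP_MIN_C (-40)
    #define TEMP_MAX_C (150)

    typedef enum { MODE_NORMAL, MODE_SAFE_STATE } ctrl_mode_t;

    /* Protection mechanism: never act on an implausible or stale reading.
     * In a degraded system the controller drops to a defined safe state,
     * e.g. de-energizing the output it controls. */
    static ctrl_mode_t control_step(int16_t temp_c, bool reading_fresh)
    {
        if (!reading_fresh || (temp_c < TEMP_MIN_C) || (temp_c > TEMP_MAX_C)) {
            return MODE_SAFE_STATE;
        }
        return MODE_NORMAL;
    }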

MD: The biggest misunderstandings seem to be around the standards that need to be followed for different equipment types. For home appliances, the main standard that must be complied with is IEC 60730 – Automatic Electrical Controls; for most other equipment, IEC 61508 with its Safety Integrity Levels (SILs) is used.

JS:  Is it necessary, better, or more efficient to use a coding standard such as MISRA C or MISRA C++ compared with an in-house developed standard to meet functional safety requirements?

AH: The various safety standards essentially mandate that you use a coding standard to reduce the complexity and ambiguity inherent in the C/C++ languages. Developing such a safe, secure, and maintainable coding standard from scratch is not something you typically do, as it requires extremely good knowledge of all the murkier corners of the language standards. The latest MISRA revisions are well thought out, field-proven, and strike a balance between rigor and usability. So adopting MISRA as a base for a coding standard is a good first step.

Jim McElroy (JM), VP of sales and marketing at LDRA Technology

However, a standard like MISRA that deals only with language features does not cover other aspects of how you write and organize your code. So the safety standards advise you to cover such items in your own complementary coding standard. Such a standard can include rules for the use (and misuse) of global variables, including how to properly use the ‘volatile’ keyword, naming conventions, how you partition your code into modules, version control hygiene, etc.

The point of a coding standard like this is not to follow it to the letter in every case, but to set you up with a well-thought-out system for how you should work. Deviations are always possible, as long as you deviate consciously, with a clear purpose, and document what you do.

And your guidelines should have a section on how to deal with and sign off deviations.
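As one small example of the kind of rule such a complementary standard might pin down, ‘volatile’ belongs on objects the compiler cannot see changing, such as memory-mapped peripheral registers, rather than being sprinkled on ordinary shared data as a substitute for proper synchronization. The register address and bit position below are invented for the sketch.

    #include <stdint.h>

    /* Hypothetical memory-mapped status register; the address is invented.
     * 'volatile' tells the compiler that every access is observable and must
     * not be cached, merged, or reordered away. */
    #define UART_STATUS_REG (*(volatile uint32_t *)0x40001000u)
    #define UART_TX_READY   (1u << 5)

    /* Busy-wait until the hardware reports the transmitter is ready.
     * Without 'volatile', the compiler could legally hoist the read out of
     * the loop and spin forever on a stale value. */
    static void wait_for_tx_ready(void)
    {
        while ((UART_STATUS_REG & UART_TX_READY) == 0u) {
            /* spin */
        }
    }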

JM: Some of the industry’s best minds have worked together to create off-the-shelf coding standards such as MISRA C and MISRA C++ that codify best programming practices to help ensure the development of safe and secure software. These standards serve as readily available best practices, supported by industry-proven tools that verify compliance. Efficiency is most certainly improved by leveraging COTS tools and commercial standards. Most assuredly, leveraging these standards results in higher-quality code and is therefore better than no standard at all. That said, depending on the application, the MISRA standard also enables deviation with justification, supporting the concept that the standard itself can be tailored to the specific needs of the application. Companies also develop in-house standards to address their needs and the applications they produce. A best practice: leverage industry-available standards and, as necessary, integrate rules from the in-house standard without deprecating the intent of the commercial standard unless there is justification.

Mike Dow (MD), Senior Product Manager, IoT Security with Silicon Labs

MD: I would say that if you don’t already have a coding methodology that is geared around producing functionally safe code, use MISRA or other similar functional safety coding standards. At least you know those methodologies have gone through a great deal of vetting. If you develop your own, the odds of getting it right the first time, or even the 20th time, are slim.

DH: First, the specifics of the coding language used are a tiny part of any attempt to achieve functional safety. You only need to check how much of any particular safety standard is devoted to this subject to realize this. Coding should be a fairly mechanical step in implementing what has been designed and should be done using a language suitable for the task. MISRA C or C++ are options, but in my opinion, MISRA C is unnecessarily complex for functional safety use cases. Simplicity is a big factor in achieving functional safety and keeping the cost of the development down. This is a big subject, but as a taster, try to take a few hundred lines of MISRA-compliant C code validated by one tool and see what happens when you try to use another validation tool. It should not be this complex; that complexity makes these projects far more expensive than they should be.

JS: How can developers improve functional safety testing methodologies when developing embedded systems?

AH: The safety standards themselves have a fair bit to say on this topic, so reading up on what the selected standard discusses and recommends for testing and verification, and cherry-picking the bits and pieces that make sense for you and your stakeholders, is a very good start. In the space and aviation industries, there’s a saying: “test what you fly – fly what you test.” This can be interpreted in many ways, but one takeaway is to be rigorous with how and what you test. Always make sure that what you are shipping is exactly what you tested. Establish testing practices early on and make sure that the system-level test is done on code built in exactly the same way as you build the release.

Read up on the ecosystem of testing tools and associated hardware and consider investing in tools that make sense for your problem domain. It might feel like a big burden to adopt new tooling, but if it helps you identify just a small number of otherwise hard-to-find issues, the time and investment will pay off.

Further, just because you have a golden build configuration that you always test does not mean that you cannot build and test totally different configurations to try to provoke your code to fall over. A simple example is to force the stack size to be so small that you are likely to provoke a stack overrun, to make sure you handle that gracefully. Another is to build your code both with maximum optimization and with no optimization. If your tests behave differently, like missing a deadline when the code is not optimized, you have pinpointed an issue that should be on your radar going forward.
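As a hedged sketch of that stack-overrun example, assuming a statically allocated task stack that grows downward and an invented fill pattern and size, a stress build can pre-fill the stack with a known pattern and measure how much of it was ever used, so an overrun in the deliberately undersized configuration is detected and reported rather than silently corrupting memory.

    #include <stddef.h>
    #include <stdint.h>

    #define STACK_WORDS  256u         /* deliberately small in the stress build */
    #define FILL_PATTERN 0xDEADBEEFu  /* invented watermark pattern */

    static uint32_t task_stack[STACK_WORDS];

    /* Pre-fill the task stack with a known pattern before the task starts. */
    void stack_watermark_init(void)
    {
        for (size_t i = 0u; i < STACK_WORDS; i++) {
            task_stack[i] = FILL_PATTERN;
        }
    }

    /* Count the words that were never overwritten, assuming the stack grows
     * downward from the top of the array toward index 0. Zero unused words
     * means the task ran out of stack and the test should flag a failure. */
    size_t stack_words_unused(void)
    {
        size_t unused = 0u;
        while ((unused < STACK_WORDS) && (task_stack[unused] == FILL_PATTERN)) {
            unused++;
        }
        return unused;
    }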

Anders Holmberg (AH), General Manager Embedded Development Tools with IAR Systems

DH: Embedded systems developers should not improve functional safety testing methodologies. A developer’s job is to follow the safety development process in the development plan using the methodologies specified in the plan. The functional safety development plan should assess what testing methodologies are suitable for the level of safety required and specify these.

JM: To improve functional safety testing methodologies, developers should leverage automation for requirements traceability to functional safety and security standards, static and dynamic analysis, as well as unit and system testing. Today, more organizations, even in the embedded space, are considering methodologies that use continuous integration/continuous deployment for more rapid development, testing, and deployment. In practice, automation techniques are used to help them develop high-quality code faster, measure the quality of that code, and integrate it with the rest of the system.

MD: The most important thing developers can do to improve testing methodologies is to document, train on, and audit their processes for tool usage and test procedures.

JS: What is the most important feature designers should look for in an integrated development environment as it relates to functional safety and security?

JM: Developers leveraging an integrated development environment for functional safety and security must have the ability to automate and demonstrate a completely traceable and auditable workflow. This workflow must extend from requirements through verification and deployment, and it must be compliant with industry-defined functional safety and security standards.

DH: Tools specification is part of the functional safety project planning. Tools should be selected based on whether they can meet the safety requirements of that plan, and different safety standards have different ways of assessing their suitability. IDEs are one of the tools that must be assessed within this framework.

AH: When selecting a toolchain and IDE for use in safety-critical projects, it can be beneficial to look for tools that are certified for your standard and integrity level. You can select a non-certified tool, but the work to justify that selection in compliance with the standard is rather big, is typically not re-usable in your next project, and requires a skill set that is a bit different from software design and coding.

An IDE that is easy to integrate with static analysis checkers and dynamic testing tools, or that has such tools built in, will always be easier to work with in the long run. Similarly, a well-integrated and feature-rich debugger will most likely ease the pain when you need to untangle a thorny problem.

JS: Thanks to our panelists for sharing their insights and experience! You might also be interested in reading “Security for Embedded Systems” – Virtual Roundtable (part 1 of 2).
