\documentclass[12pt,a4paper,notitlepage]{report} \usepackage[utf8]{inputenc} \usepackage[a4paper,textwidth=17cm, top=2cm, bottom=3.5cm]{geometry} \usepackage[T1]{fontenc} \usepackage[ backend=biber, style=numeric, natbib=true, url=true, doi=true, eprint=false ]{biblatex} \addbibresource{safety_reset.bib} \usepackage{amssymb,amsmath} \usepackage{listings} \usepackage{eurosym} \usepackage{wasysym} \usepackage{amsthm} \usepackage{tabularx} \usepackage{multirow} \usepackage{multicol} \usepackage{tikz} \usepackage{mathtools} \DeclarePairedDelimiter{\ceil}{\lceil}{\rceil} \DeclarePairedDelimiter{\paren}{(}{)} \usetikzlibrary{arrows} \usetikzlibrary{chains} \usetikzlibrary{backgrounds} \usetikzlibrary{calc} \usetikzlibrary{decorations.markings} \usetikzlibrary{decorations.pathreplacing} \usetikzlibrary{fit} \usetikzlibrary{patterns} \usetikzlibrary{positioning} \usetikzlibrary{shapes} \usepackage[binary-units]{siunitx} \usepackage{hyperref} \usepackage{tabularx} \usepackage{commath} \usepackage{graphicx,color} \usepackage{subcaption} \usepackage{float} \usepackage{footmisc} \usepackage{array} \usepackage[underline=false]{pgf-umlsd} \usetikzlibrary{calc} %\usepackage[pdftex]{graphicx,color} \usepackage{epstopdf} \usepackage{pdfpages} \usepackage{minted} % pygmentized source code % Needed for murks.tex \usepackage{setspace} \usepackage[draft=false,babel,tracking=true,kerning=true,spacing=true]{microtype} % optischer Randausgleich etc. % For german quotation marks \newcommand{\foonote}[1]{\footnote{#1}} \newcommand{\degree}{\ensuremath{^\circ}} \newcolumntype{P}[1]{>{\centering\arraybackslash}p{#1}} \usepackage{fancyhdr} \fancyhf{} \fancyfoot[C]{\thepage} \newcommand{\includenotebook}[2]{ \fancyhead[C]{Included Jupyter notebook: #1} \includepdf[pages=1, pagecommand={\thispagestyle{fancy}\section{#1}\label{#2_notebook}} ]{resources/#2.pdf} \includepdf[pages=2-, pagecommand={\thispagestyle{fancy}} ]{resources/#2.pdf} } \begin{document} % Beispielhafte Nutzung der Vorlage für die Titelseite (bitte anpassen): \input{murks} \titelen{A Post-Attack Recovery Architecture for Smart Electricity Meters} \titelde{Eine Architektur zur Kontrollwiederherstellung nach Angriffen auf Smart Metering in Stromnetzen} \typ{Masterarbeit} \grad{Master of Science (M. Sc.)} \autor{Jan Sebastian Götte} \gebdatum{Aus Datenschutzgründen nicht abgedruckt} % Geburtsdatum des Autors \gebort{Aus Datenschutzgründen nicht abgedruckt} % Geburtsort des Autors \gutachter{Prof. Dr. Björn Scheuermann}{Prof. Dr.-Ing. Eckhard Grass} \mitverteidigung % entfernen, falls keine Verteidigung erfolgt %FIXME \makeTitel \selbstaendigkeitserklaerung{31.03.2020} \newpage % Hier folgt die eigentliche Arbeit (bei doppelseitigem Druck auf einem neuen Blatt): \tableofcontents \newpage \chapter{Introduction} \section{Structure and operation of the electrical grid} \subsection{Structure of the electrical grid} \subsubsection{Generators and loads} \subsubsection{Transformers} \subsubsection{Tie lines} \subsection{Operational concerns} \subsubsection{Modelling the electrical grid} \subsubsection{Generator controls} \subsubsection{Load shedding} \subsubsection{System stability} \subsubsection{Power System Stabilizers} \subsubsection{Smart metering} \section{Smart meter technology} \subsubsection{Common components} Smart meters usually are built around a standard microcontroller. 
\label{sm-cpu}
\subsubsection{Cryptographic coprocessors}
\subsubsection{Physical structure}
\subsubsection{Physical installation}
\section{Regulatory frameworks around the world}
\subsection{International standards}
\subsection{The regulatory situation in selected countries}
\subsubsection{Germany}
\subsubsection{France}
\subsubsection{The UK}
\subsubsection{Italy}
\subsubsection{North America}
\subsubsection{Japan}
\subsection{Common themes}
\section{Security in smart grids}
The smart grid in practice is nothing more or less than an aggregation of embedded control and measurement devices that are part of a large control system. This implies that all the same security concerns that apply to embedded systems in general also apply to most components of a smart grid in some way. Where programmers have been struggling for decades now with input validation\cite{leveson01}, the same potential issue raises security concerns in smart grid scenarios as well\cite{mo01, lee01}. However, in the smart grid two complicating factors are present: Many components are embedded systems, and as such inherently hard to update. Also, the smart grid and its control algorithms act as a large (partially-)distributed system, making problems such as input validation or authentication difficult to implement\cite{blaze01} and adding a host of distributed systems problems on top\cite{lamport01}.

Given that the electrical grid is a major piece of essential infrastructure in modern civilization, these problems amount to significant issues in practice. Attacks on the electrical grid may have grave consequences\cite{lee01}, while the long maintenance cycles of various components make the system slow to adapt. Thus, components for the smart grid need to be built to a much higher standard of security than most consumer devices to ensure they withstand well-funded attackers even decades down the road. This requirement intensifies the challenges of embedded security and distributed systems security among others that are inherent in any modern complex technological system.

A point we will not consider in much depth is theft of electricity. A large part of the motivation for the introduction of smart meters seems to be % TODO weak statement
to reduce the level of fraud by consumers. Academic papers tend to either focus on other benefits such as generation efficiency gains through better forecasting or try to rationalize the fundamentally anti-consumer nature of smart metering with strenuous claims of ``enormous social benefits''\cite{mcdaniel01}. We will entirely focus on grid stability and discard electricity theft in the context of this paper for two reasons: One, billing inaccuracies of electricity companies are of very low urgency compared to grid stability, and grid stability is a precondition for accurate billing in the first place. Two, utility companies can already put strong bounds on the amount of theft by simply cross-referencing meter readings against trusted readings from upstream sections of the grid (see the short sketch below). This capability works even without smart meters and merely gains speed from them; conversely, the old exploit of bypassing the meter with a section of wire cannot be prevented this way either. Due to these bounds on its volume, electricity theft using smart meter hacking would not scale. Hackers would simply be rooted out one by one with no damage to consumers and very limited damage to utility companies. Damage in these scenarios would be a far cry from the efficiency of an exponentially growing botnet.
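To make the cross-referencing argument concrete, the following toy Python sketch (with entirely made-up numbers, not taken from any real metering deployment) bounds the unaccounted-for energy on a single feeder by comparing a trusted upstream substation reading against the sum of all downstream meter readings. However individual meters are manipulated, total theft on the feeder cannot exceed this bound.

\begin{minted}{python}
# Toy energy-balance check for one feeder; all figures are hypothetical.
upstream_reading_kwh = 1_700.0       # trusted substation meter, one day
downstream_readings_kwh = [320.5, 298.0, 410.2, 289.9, 305.0]  # household meters
technical_loss_estimate_kwh = 50.0   # engineering estimate of line/transformer losses

unaccounted_kwh = (upstream_reading_kwh
                   - sum(downstream_readings_kwh)
                   - technical_loss_estimate_kwh)

# Whatever readings compromised meters report, the energy actually delivered
# but not billed on this feeder is bounded by this figure.
print(f"unaccounted energy: {unaccounted_kwh:.1f} kWh")
\end{minted}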
\subsection{Smart grid components as embedded devices}
A fundamental challenge in smart grid implementations is the central role smart electricity meters play. Smart meters are used both for highly granular load measurement and (in some countries) load switching\cite{zheng01}. Smart electricity meters are effectively consumer devices. They are built down to a price point that is measured by the burden it puts on consumers and that is generally fixed by regulatory authorities.
% FIXME cite
This requirement precludes some hardware features such as the use of a standard hardened software environment on a high-powered embedded system (such as a hypervisor-based embedded Linux setup) that would both increase resilience against attacks and simplify updates. Combined with the small market sizes in smart grid deployments
\footnote{
    Most vendors of smart electricity meters only serve a handful of markets. For the most part, smart meter development cost lies in the meter's software. % TODO cite?
    There exist multiple competing standards applicable to various parts of a smart electricity meter. In addition, most countries have their own certification regimen\cite{cenelec01}. This complexity creates a large development burden for new market entrants\cite{perez01}.
}
this produces high cost pressure on the software development process for smart electricity meters.

\subsection{The state of the art in embedded security}
Embedded security is generally much harder than security of higher-level systems. This is due to a combination of the unique constraints of embedded devices (hard to update, usually produced in small quantities) and their lack of capabilities (processing power, memory protection functions, user interface devices). Even very well-funded companies continue to have serious problems securing their embedded systems. A spectacular example of this difficulty is the recently exposed flaw in Apple's iPhone SoC first-stage ROM bootloader\footnote{
    Modern system-on-chips integrate one or several CPUs with a multitude of peripherals, from memory and DMA controllers over 3D graphics accelerators down to general-purpose IO modules for controlling things like indicator LEDs. Most SoCs boot from one of several boot devices such as flash memory, Ethernet or USB according to a configuration set e.g. by connecting some SoC pins a certain way or set by device-internal write-once fuse bits. Physically, one of the processing cores of the SoC (usually one of the main CPU cores) is connected such that it is taken out of reset before all other devices, and is tasked with switching on and configuring all other devices of the SoC. In order to run later initialization code or more advanced bootloaders, this core on startup runs a very small piece of code hard-burned into the SoC in the factory. This ROM loader initializes the most basic peripherals such as internal SRAM memory and selects a boot device for the next bootloader stage. Apple's ROM loader performs some authorization checks to ensure no unauthorized software is loaded. The present flaw allows an attacker to circumvent these checks, booting code not authorized by Apple on a USB-connected iPhone, compromising Apple's chain of trust from ROM loader to userland right at its root.
}, which allows a full compromise of any iPhone prior to the iPhone X. The iPhone 8, one of the affected models, is still being manufactured and sold by Apple today\footnote{
    i.e. at the time this paragraph was written, on %FIXME
}.
In another instance, Samsung shipped a flaw in the secure-world firmware used for protection of sensitive credentials in their mobile phone SoCs. % FIXME year
If both of these very large companies have trouble securing parts of their secure embedded software stacks measuring a mere few hundred bytes in Apple's case or a few kilobytes in Samsung's, what is a smart electricity meter manufacturer to do? For their mass-market phones, these two companies have R\&D budgets that dwarf some countries' national budgets.
% FIXME hyperbole?
% FIXME cite
Since thorough formal verification of code is not yet within reach for either large-scale software development or code heavy in side effects such as embedded firmware or industrial control software\cite{pariente01}, the two most effective measures for embedded security are reducing the amount of code on the one hand, and labour-intensively checking and double-checking this code on the other. A smart electricity meter manufacturer does not have a say in the former since it is bound by the official regulations it has to comply with, and will almost certainly not have sufficient resources for the latter.
% FIXME expand?
% FIXME cite some figures on code size in smart meter firmware?

\subsection{Attack avenues in the smart grid}
If we model the smart grid as a control system responding to changes in inputs by regulating outputs, on a very high level we can see two general categories of attacks: Attacks that directly change the state of the outputs, and attacks that try to influence the outputs indirectly by changing the system's view of its inputs. The former would be an attack such as one that shuts down a power plant to decrease generation capacity. The latter would be an attack such as one that forges grid frequency measurements where they enter a power plant's control systems to provoke increasing oscillation in the amount of power generated by the plant according to the control systems' directions.
% FIXME cite
% FIXME expand

\subsubsection{Communication channel attacks}
Communication channel attacks are attacks on the communication links between smart grid components. This could be attacks on IP-connected parts of the core network or attacks on shared buses between smart meters and IP gateways in substations. Generally, these attacks can be mitigated by securing the aforementioned communication links using modern cryptography. IP links can be protected using TLS, and more low-level buses can be protected using more lightweight Noise\cite{perrin01}-based protocols.

Cryptographic security transforms an attacker's ability to manipulate communication contents into a mere denial-of-service attack. Thus, in addition to cryptographic security, safety under DoS conditions must be guaranteed to preserve system performance under attack. This safety property is identical to the safety required to withstand random outages of components, such as communications link outages due to physical damage from storms, flooding etc.
% FIXME cite papers on attack impact, on countermeasures and on attack realization
In general, attacks at the meter level may be hard to weaponize % may be -> weak statement?
since meters are used mostly for billing and forecasting purposes % FIXME cite
and for more critical grid control purposes there exist several additional layers of sensors above smart meters that limit how much an attacker can falsify smart meter readings without the manipulation being obvious.
In order for an attack to have more far-reaching consequences the attacker would need to compromise additional grid infrastructure\cite{kim01,kosut01}.

\subsubsection{Exploiting centralized control systems}
The type of smart grid attack most often cited in popular discourse, and to the author's knowledge % FIXME verify, cite
the only type that has so far been conducted in practice, is a direct attack on centralized control systems. In this attack, computer components of control systems are compromised by the same techniques used to compromise any other kind of computer system, such as exploiting insecure services running on internet-exposed ports and using one compromised system to compromise other systems connected with it through an ostensibly secure internal network. These attacks are very powerful as they yield the attacker direct control over whatever outputs the control systems are controlling. If an attacker manages to compromise a power station's control computers, they may be able to influence generation output or even cause an emergency shutdown.
% FIXME
Despite their potentially large impact, these attacks are only moderately interesting from a scientific perspective. For one, their mitigation mostly consists of a straightforward application of security practices that have been well known for decades. Though there is room for the implementation of genuinely new, application-specific security systems in this field, the general state of the art is lagging behind the rest of the computer industry such that the low-hanging fruit should take priority.
% FIXME cite this bold claim very properly
In addition, given political will these systems can readily be secured since there is only a comparatively small number of them and driving a technician to every one of them in turn to install some security update is perfectly feasible.

\subsubsection{Control function exploits}
Control function exploits are attacks on the mathematical control loops used by the centralized control system. One example of such an attack would be resonance attacks as described in \textcite{wu01}. In this kind of attack, inputs from peripheral sensors indicating grid load to the centralized control system are carefully modified to cause a disproportionately large oscillation in control system action. This type of attack relies on complex resonance effects that arise when mechanical generators are electrically coupled. These resonances, colloquially called ``modes'', are well-studied in power system engineering\cite{rogers01,grebe01,entsoe01}.
% FIXME: refer to section on stability control above here
Even disregarding modern attack scenarios, electrical grids are for stability reasons designed with measures in place to dampen any resonances inherent to the grid structure. Still, since their analysis requires an accurate grid model, these resonances are hard to characterize and unlikely to be noticed under normal operating conditions. Mitigation of these attacks is most easily done by, on the one hand, ensuring unmodified sensor inputs to the control systems in the first place, and on the other hand, carefully designing control systems not to exhibit exploitable behavior such as oscillations.
% FIXME cite mitigation approaches

\subsubsection{Endpoint exploits}
One rather interesting attack on smart grid systems is one exploiting the grid's endpoint devices such as smart electricity meters\footnote{
    Though potentially this could also aim at other kinds of devices distributed on a large scale such as sensors in unmanned substations.
    % FIXME cite verify
}
These meters are deployed on a massive scale, with several thousand meters deployed for every substation.
% FIXME cite (this should be straightforward)
Thus, once they are compromised, restoration to an uncompromised state can potentially be very difficult if it requires physical access to thousands of devices tucked away inaccessibly in private homes.

By compromising smart electricity meters, an attacker can trivially forge the distributed energy measurements these devices perform. In a best-case scenario, this might only affect billing and lead to customers being under- or over-charged if the attack is not noticed in time. However, in a less ideal scenario the energy measurements taken by these devices might be used to inform the grid's centralized control systems % FIXME cite
and a falsification of these measurements might lead to inefficiency.

In some countries and for some customers, these smart meters have one additional function that is highly useful to an attacker: They contain high-current load switches to disconnect the entire household or business in case electricity bills are left unpaid for a certain period. In countries that use these kinds of systems, the load disconnect is often simply hooked up to one of the smart meter's central microcontroller's general-purpose IO pins, allowing anyone compromising this microcontroller's firmware to actuate the load switch at will.
% FIXME validate cite add pictures
Given control over a large number of network-connected smart meters, an attacker might thus be able to cause large-scale disruptions of power consumption by repeatedly disconnecting and re-connecting a large number of consumers.
% FIXME cite some analysis of this
Combined with an attack method such as the resonance attack from \textcite{wu01} that was mentioned above, this scenario poses a serious danger to grid stability.
% FIXME add small-scale load shedding for heaters etc.

\subsection{Attacker models in the smart grid}
\subsection{Practical attacks}
\subsection{Practical threats}
\subsection{Conclusion, or why we are doomed}
We can conclude that a compromise of a large number of smart electricity meters cannot be ruled out. The complexity of network-connected smart meter firmware makes it exceedingly unlikely that it is in fact flawless. Large-scale deployments of these devices under some circumstances, such as where they are used with load disconnect relays, make them an attractive target for attackers interested in causing grid instability. The attacker model for these devices certainly includes hostile states, who have considerable resources at their disposal.

For a reasonable guarantee that no large-scale compromises of hardware and software built today will happen over a span of some decades, we would have to radically simplify their design and limit their attack surface. Unfortunately, the complexity of smart electricity meter implementations mostly stems from the large list of requirements these devices have to conform to. Additionally, standards have already been written and changes that reduce scope or functionality have become exceedingly unlikely at this point.

A general observation with smart grid systems of any kind is that they constitute a sweeping departure from the decentralized control structure of yesterday's dumb grid and an advent of centralization at an enormous scale. This modern, centralized infrastructure has been carefully designed to defend against malicious actors %FIXME cite
and all involved parties have an interest in keeping it secure.
Still, as in any other system, this centralization also makes for a very attractive target, since an attacker can likewise employ the centralized control towards their own goals. Fundamentally, decentralized systems tend to make attacks of any kind a lot more costly, and one might question whether security has truly been gained during the smart grid rollout.
% FIXME hot take maybe

\chapter{Restoring endpoint safety in an age of smart devices}
If, as laid out in the previous chapter, we cannot rule out a large-scale compromise of smart energy meters, we have to rephrase our claim to security. If we cannot rule out exploitation, we have to limit its impact. If we assume that we cannot strip any functionality from smart meters since it may be required by standards or for enormous social benefits\cite{mcdaniel01} % FIXME is sarcasm ok here?
all we can do is to flush out an attacker once they are in.

In a worst-case scenario an attacker would gain unconstrained code execution, e.g. by exploiting a flaw in a network protocol implementation. Since smart meters use standard microcontrollers that do not have advanced memory protection functions (see pg. \ref{sm-cpu}), at this point we can assume the attacker has full control over the main microcontroller. With this control they can actuate the load switch if present, transmit data through the device's communication interfaces or use the user interface components such as LEDs and the LCD. Using the self-programming capabilities of modern flash microcontrollers, an attacker may even gain persistence without much trouble. Note that in systems separating cryptographic functions into some form of cryptographic module, such as the systems used in Germany, % TODO list other countries as well? FIXME cite BSI standard requiring this
we can be optimistic and assume the attacker has not in fact compromised this cryptographic co-processor and does not yet have access to any cryptographic secrets.

Given that the attacker has complete control over the meter's core microcontroller, and given that due to cost constraints we are bound to use whatever microcontroller the meter OEM has chosen for their design, we cannot rely on software running on the core microcontroller to restore system integrity. Our solution to this problem is to add another, very small microcontroller to the smart meter design. This microcontroller will contain a small piece of software to receive cryptographically authenticated commands from utility companies and on demand reset the meter's core microcontroller to a known-good state. We have to assume the code in the core controller's flash memory has been compromised, so our only option to flush out an attacker is to re-program the core microcontroller in its entirety.

We propose using JTAG to re-program the core microcontroller % TODO get terminology consistent. Is "core microcontroller" a good term here?
with a known-good firmware image read from a sufficiently large SPI flash connected to the reset controller. JTAG is supported by most microcontrollers complex enough to end up in a smart meter design % TODO colloquialism
and given adequate documentation JTAG programming functionality can be ported to new microcontrollers with relatively little work. On the microcontroller side our solution requires the JTAG interface to be activated (i.e. not fused shut) and, for our solution to work, core microcontroller firmware must not be able to permanently disable the JTAG interface from within.
In microcontrollers that do not yet provide this functionality this is a minor change that could be added to a custom microcontroller variant at low cost. On most microcontrollers keeping JTAG open should not interfere with code readout protection. Code secrecy should be of no concern\cite{schneier01} here, but security aside, manufacturers have strong preferences about this due to fear of copyright infringement.

\section{The theory of endpoint safety}
\label{sec_criteria}
In order to gain anything by adding our reset controller to the smart meter's already complex design we must satisfy two interrelated conditions.
\begin{enumerate}
    \item \textsc{security} means our reset controller itself does not have any remotely exploitable flaws.
    \item \textsc{safety} means our reset controller will perform its job as intended.
\end{enumerate}
Note that our \textsc{security} property includes only remote exploitation, and excludes any form of hardware attack. Even though most smart meters provide some level of physical security, we do not wish to make any assumptions about it. In the following section we will elaborate our attacker model and it will become apparent that sufficient physical security to defend against all attackers in our model would be infeasible; thus we will design our overall system to remain secure even assuming some number of physically compromised devices.
% FIXME expand

\subsection{Attack characteristics}
The attacker model these two conditions must hold under is as follows. We assume three angles of attack: Attacks by the customers themselves, attacks by an insider within the utility company controlling the metering systems, and lastly attacks from third parties. Examples of such third parties are hobbyist hackers or outside cyber-criminals on the one hand, but also other companies participating in the smart grid infrastructure besides the utility company, such as intermediary providers of meter-reading services.

Due to the critical nature of the electrical grid, we have to include hostile state actors in our attacker model. When acting directly, these would be classified as third-party attackers by the above schema, but they can reasonably be expected to be able to assume either of the other two roles as well, e.g. through infiltration or bribery. \textcite{fraunholz01} in their elaboration of their generalized attacker model give some classification of attackers and provide a nice taxonomy of attacker properties. In their threat/capability rating, criminals are still considered to have a higher threat rating than state-sponsored attackers. The New York Times reported in 2016 that some states recruit their hacking personnel in part from cyber-criminals. If this report is true, in a worst-case scenario we have to assume a state-sponsored attacker to be the worst of both types. Comparing this against the other attacker types in \textcite{fraunholz01}, this state-sponsored attacker is strictly worse than any other type in both variables. We are left with a highly skilled, very well-funded, highly intentional and motivated attacker.

Based on the above classification of attack angles and our observations on state-sponsored attacks, we can adapt \textcite{fraunholz01} to our problem, yielding the following new attacker types:
\begin{enumerate}
    \item \textbf{Utility company insiders controlled by a state actor}
    We can ignore the other internal threats described in \textcite{fraunholz01} since an insider cooperating with a state actor is strictly worse in every respect.
    \item \textbf{State-sponsored external attackers}
    A state actor can obviously directly attack the system through the internet.
    \item \textbf{Customers controlled by a state actor}
    A state actor can very well compromise some customers for their purposes. They might either physically infiltrate the system posing as legitimate customers, or they might simply deceive or bribe existing customers into cooperation.
    \item \textbf{Regular customers}
    Though a hostile state actor might gain control of some number of customers through means such as voluntary cooperation, bribery or infiltration, they are limited in attack scale since they do not want to arouse premature attention. Though regular customers may not have the motivation, skill or resources of a state-sponsored attacker, potentially large numbers of them may try to attack a system out of financial self-interest. To allow for this possibility, we consider regular customers separately from state actors posing as customers in some way.
\end{enumerate}

\subsection{Overall structural system security}
Considering overall security, we first introduce the \emph{reset authority}, a trusted party acting as the single authority for issuing reset commands in our system. In practice this trusted party may be part of the utility company, part of an external regulatory body or a hybrid setup requiring both to cooperate. We assume this party will be designed to be secure against all of the above attacker types. The precise design of this trusted party is out of scope for this work, but we will list some practical suggestions on how to achieve security below.
% FIXME do the list
% FIXME put up a large box on this limitation

Using an asymmetric cryptographic design centered around the \emph{reset authority}, we rule out all attacks except for denial-of-service attacks on our system by any of the four attacker types. All reset commands in our system originate from the \emph{reset authority} and are cryptographically secured to provide authentication and tamper detection. Under this model, attacks on the electrical grid components between the \emph{reset authority} and the customer device degrade into man-in-the-middle attacks. To ensure the \textsc{safety} criterion from \ref{sec_criteria} holds we must % FIXME check whether this \ref displays as intended
make sure our cryptography is secure against man-in-the-middle attacks and we must try to harden the system against denial-of-service attacks by the attacker types listed above. Given our attacker model we cannot fully guard against this sort of attack, but we can at least choose a communication channel that is resilient against denial-of-service attacks under the above model.

Finally, we have to consider the issue of hardware security. We will solve the problem of physical attacks on some small number of devices by simply not programming any secret information into these devices. This also simplifies hardware production. We explicitly rule out any form of supply-chain attack as out of scope for this work.
% FIXME include considerations on production testing somewhere (is the device working? is the right key programmed?)

\subsection{Complex microcontroller firmware}
The \textsc{security} property from \ref{sec_criteria} relies in large part on the security of our reset controller firmware. The best method to increase firmware security is to reduce attack surface by limiting external interfaces as much as possible and by reducing code complexity as much as possible.
% FIXME formalize this as something like "Design Goal DG-023-42-1" ?
If we avoid the complexity of most modern microcontroller firmware we gain another benefit beyond implicitly reduced attack surface: If the resulting design is small enough, we may attempt formal verification of our security property. Though formal verification tools are not yet suitable for highly complex tasks, they are already just about adequate for small amounts of code and simple interfaces.

\subsection{Modern microcontroller hardware}
Microcontrollers have gained enormously in both performance and efficiency as well as in peripheral support. Alas, these gains have largely been driven by insatiable customer demand for faster, more powerful chips, and for a long time security has not been considered important outside of some specific niches such as smartcards. Traditionally a microcontroller would spend its entire lifetime without ever being exposed to any networks. Though this trend has been reversing with the increasing adoption of internet-of-things devices, and more advanced security features have started appearing in general-purpose microcontrollers, most still lack even basic functionality found in processors for computers or smartphones.

One of the components lacking from most microcontrollers is strong memory protection, or even a memory mapping unit as found in all modern computer processors and SoCs for applications such as smartphones. Without an MMU or MPU some mitigations for memory safety violations cannot be implemented. This and the absence of virtualization tools such as ARM's TrustZone make hardening microcontroller firmware a big task. It is very important to ensure memory safety in microcontroller firmware through tools such as defensive coding, extensive testing and formal verification. In our design we achieve simplicity on two levels: One, we isolate the very complex metering firmware from our reset controller by having both run on separate microcontrollers. Two, we keep the reset controller firmware itself extremely simple to reduce attack surface there.

\subsection{Regulatory and economic constraints}
%FIXME
\subsection{Safety vs. Security: Opting for restoration instead of prevention}
%FIXME
\subsection{Technical outline of a safety reset system}
%FIXME

\section{Communication channels on the grid}
There are a number of well-established technologies for communication on or along power lines. We can distinguish three basic system categories: Systems using separate wires (such as DSL over landline telephone wiring), wireless radio systems (such as LTE) and \emph{powerline communication} (PLC) systems that re-use the existing mains wiring and superimpose data transmissions on the 50 Hz mains sine\cite{gungor01,kabalci01}.

For our scenario, we will ignore short-range communication systems. There exists a large number of \emph{wideband} powerline communication systems that are popular with consumers for bridging Ethernet between parts of an apartment or house. These systems transmit at up to several hundred megabits per second over distances of up to several tens of meters\cite{kabalci01}. Technologically, these wideband PLC systems are very different from the \emph{narrowband} systems used by utilities for load management among other applications, and they are not relevant to our analysis.
\subsection{Powerline communication (PLC) systems and their use}
In long-distance communications for applications such as load management, PLC systems are attractive since they allow re-using the existing wiring infrastructure; they have been used as early as the 1930s\cite{hovi01}. Narrowband PLC systems are a potentially low-cost solution to the problem of transmitting data at small bandwidth over distances of several hundred meters up to tens of kilometers. Narrowband PLC systems transmit on the order of kilobits per second or slower. A common use of this sort of system is \emph{ripple control}. These systems superimpose a low-frequency signal with a carrier frequency of a few hundred hertz on top of the 50 Hz mains sine. This low-frequency signal is used to encode switching commands for non-essential residential or industrial loads. Ripple control systems provide utilities with the ability to actively control demand while promising small savings in electricity cost to consumers\cite{dzung01}.

In any PLC system there is a strict tradeoff between bandwidth, power and distance. Higher bandwidth requires higher power and reduces maximum transmission distance. Where ripple control systems usually use few transmitters to cover the entire grid of a regional distribution utility, higher-bandwidth bidirectional systems used for automatic meter reading (AMR) in places such as Italy or France require repeaters within a few hundred meters of a transmitter.

\subsection{Landline and wireless IP-based systems}
Especially in automated meter reading (AMR) infrastructure the cost-benefit tradeoff of powerline systems does not always work out for utilities. A common alternative in these systems is to use the public internet for communication. Using the public internet has the advantage of low initial investment on the part of the utility company as well as quick commissioning. Disadvantages compared to a PLC system are potentially higher operational costs due to recurring fees to network providers as well as lower reliability. Being integrated into power grid infrastructure, a PLC system's failure modes are highly correlated with the overall grid. Put briefly, if the PLC interface is down, there is a good chance that power is out, too. In contrast to this, general internet services exhibit a multitude of failure modes that are entirely uncorrelated with power grid stability. For purposes such as meter reading for billing, this reliability is sufficient. However, for systems that need to hold up in crisis situations, such as the recovery system we are contemplating in this thesis, the public internet may not provide sufficient reliability.

\subsection{Proprietary wireless systems}
% FIXME

\subsection{Frequency modulation as a communication channel}
For our system, we chose grid frequency modulation (henceforth GFM) as a low-bandwidth unidirectional broadcast communications channel. Compared to traditional PLC, GFM requires only a small amount of additional hardware, works reliably throughout the grid and is harder for a malicious actor to manipulate. Grid frequency in Europe's synchronous areas is nominally 50 Hertz, but there are small load-dependent variations from this nominal value. Any device connected to the power grid (or even just within physical proximity of power wiring) can reliably and accurately measure grid frequency at low hardware overhead. By intentionally modifying grid frequency, we can create a very low-bandwidth broadcast communication channel.
Grid frequency modulation has only ever been proposed as a communications channel at very small scales in microgrids before\cite{urtasun01}, but to our knowledge it has not yet been considered for large-scale application. Advantages of using grid frequency for communication are low receiver hardware complexity as well as the fact that a single transmitter can cover an entire synchronous area. Though the transmitter has to be very large and powerful, setup of a single large transmitter faces lower bureaucratic hurdles than integration of hundreds of smaller ones into hundreds of local systems, each with autonomous governance.

\subsubsection{The power dependence of grid frequency}
Despite the awesome complexity of large power grids, the physics underlying their response to changes in load and generation is surprisingly simple. Individual machines (loads and generators) can be approximated by a small number of differential equations, and the entire grid can be modelled by aggregating these approximations into a large system of linear differential equations. Evaluating these systems, it has been found that in large power grids small-signal steady-state changes in the generation/consumption power balance cause a linear change in frequency\cite{kundur01,entsoe02,entsoe04}. \emph{Small signal} here describes changes in power balance that are small compared to overall grid power. \emph{Steady state} describes changes over a timeframe of multiple cycles, as opposed to transient events that only last a few milliseconds.

This approximately linear relationship allows the specification of a coefficient linking $\Delta P$ and $\Delta f$ with unit \si{\watt\per\hertz}. In this thesis we are using the European power grid as our model system. We are using data provided by ENTSO-E (formerly UCTE), the governing association of European transmission system operators. In our calculations we use data for the continental European synchronous area, the largest synchronous area. $\frac{\Delta P}{\Delta f}$, called the \emph{Overall Network Power Frequency Characteristic} by ENTSO-E, is around \SI{25}{\giga\watt\per\hertz}.

We can derive general design parameters for any system utilizing grid frequency as a communications channel from the policies of ENTSO-E\cite{entsoe02,entsoe03}. Any such system should probably stay below a modulation amplitude of \SI{100}{\milli\hertz}, which is the threshold defined in the ENTSO-E incidents classification scale for a Scale 0--1 (from ``Anomaly'' to ``Noteworthy Incident'') frequency degradation incident\cite{entsoe03} in the continental European synchronous area.

\subsubsection{Control systems coupled to grid frequency}
The ENTSO-E Operations Handbook Policy 1 chapter defines the activation threshold of primary control to be \SI{20}{\milli\hertz}. Ideally a modulation system would stay well below this threshold to avoid fighting the primary control reserve. Modulation line rate should probably be on the order of a few hundred millibaud.
% FIXME is using "probably" here and in the previous paragraph ok?
Modulation at such high rates would outpace primary control action, which is specified by ENTSO-E as acting within ``a few seconds'' to \SI{15}{\second}. The effective \emph{Network Power Frequency Characteristic} of primary control in the European grid is reported by ENTSO-E at around \SI{20}{\giga\watt\per\hertz}. Keeping modulation amplitude below the primary control activation threshold would help to avoid spuriously triggering these control functions.
This works out to an upper bound on modulation power of \SI{20}{\mega\watt\per\milli\hertz} of frequency deviation.

\subsubsection{Practical transmitter implementation}
In its most basic form a transmitter for grid frequency modulation would be a very large controllable load connected to the power grid at a suitable vantage point. A spool of wire submerged in a body of cooling water (such as a small lake with a fence around it) along with a thyristor rectifier bank would likely suffice to perform this function during occasional cybersecurity incidents. We can however decrease hardware and maintenance investment even further compared to this rather crude solution by repurposing regular large industrial loads for our transmitter purposes in an emergency situation.

For some preliminary exploration we went through a list of energy-intensive industries in Europe\cite{ec01}. The most electricity-intensive industries in this list are primary aluminium and steel production. In primary production, raw ore is converted into raw metal for further refinement such as casting, rolling or extrusion. In steelmaking, iron is melted in an electric arc furnace. In aluminium smelting, aluminium is electrolytically extracted from alumina. Both processes involve large amounts of electricity, with electricity making up \SI{40}{\percent} of production costs. Given these circumstances a steel mill or aluminium smelter would be a good candidate as a transmitter in a grid frequency modulation system.

In aluminium smelting, high-voltage mains power is transformed, rectified and fed into about 100 series-connected cells forming a \emph{potline}. Inside the pots, alumina is dissolved in molten cryolite electrolyte at about \SI{1000}{\degreeCelsius} and electrolysis is performed using a current of tens or hundreds of kiloamperes. The resulting pure aluminium settles at the bottom of the cell and is tapped off for further processing. Like steelworks, aluminium smelters are operated night and day without interruption. Aside from metallurgical issues, the large thermal mass and enormous heating power requirements do not permit power-cycling. Due to the high costs of production inefficiencies or interruptions, the behavior of aluminium smelters under power outages is a fairly well-characterized phenomenon in the industry.

The recent move away from nuclear power and towards renewable energy has led to an increase in fluctuations of the electricity price throughout the day. These electricity price fluctuations have provided enough economic incentive for aluminium smelters to develop techniques to modulate smelter power consumption without affecting cell lifetime or the output product\cite{duessel01,eisma01}. Power outages of tens of minutes up to two hours reportedly do not cause problems in aluminium potlines and are in fact part of routine operation for purposes such as electrode changes\cite{eisma01,oye01}.

The power supply system of an aluminium plant is managed through a highly integrated control system, as keeping all cells of a potline under optimal operating conditions is challenging. Modern power supply systems employ large banks of diodes or SCRs to rectify low-voltage AC to DC that is fed into the potline\cite{ayoub01}. The potline voltage can be controlled almost continuously through a combination of a tap changer and a transductor. The individual cell voltages can be controlled by changing the anode to cathode distance (ACD), i.e. by physically lowering or raising the anode.
The potline power supply is connected to the high voltage input and to the potline through isolators and breakers. In an aluminium smelter most of the power is sunk into resistive losses and the electrolysis process. As such, an aluminium smelter does not have any significant electromechanical inertia compared to the large rotating machines used in other industries. Depending on the capabilities of the rectifier controls, high slew rates should be possible, permitting modulation at high\footnote{Aluminium smelter rectifiers are \emph{pulse rectifiers}. This means that instead of simply rectifying the incoming three-phase voltage they use a special configuration of transformer secondaries and in some cases additional coils to produce a larger number (such as six) of equally spaced phases. Where a direct-connected three-phase rectifier would draw current in 6 pulses per cycle, a pulse rectifier draws current in more, smaller pulses to increase power factor. E.g. a 12-pulse rectifier will draw current in 12 pulses per cycle. In the best case an SCR pulse rectifier switched at the zero crossing should allow \SIrange{0}{100}{\percent} load changes from one rectifier pulse to the next, i.e. within a fraction of a single cycle.} data rates.
% FIXME validate this \subsubsection with an expert

\subsubsection{Avoiding dangerous modes}
Modern power systems are complex electromechanical systems. Each component is controlled by several carefully tuned feedback loops to ensure voltage, load and frequency regulation. Multiple components are coupled through transmission lines that themselves exhibit complex dynamic behavior. The overall system is generally stable, but may exhibit some instabilities in response to particular small-signal stimuli. These instabilities, called \emph{modes}, occur when, due to mis-tuned parameters or physical constraints, the overall system exhibits oscillations at particular frequencies. \textcite{kundur01} splits these into four categories:
\begin{description}
    \item[Local modes] where a single power station oscillates in some parameter
    \item[Interarea modes] where subsections of the overall grid oscillate w.r.t.\ each other due to weak coupling between them
    \item[Control modes] caused by imperfectly tuned control systems
    \item[Torsional modes] that originate from electromechanical oscillations in the generator itself
\end{description}
The oscillation frequencies associated with each of these modes are usually between a few tens of millihertz and a few hertz, see for example \textcite{grebe01} and \textcite{entsoe01}.

It is hard to predict the particular modes of a power system at the scale of the central European interconnected system. Theoretical analysis and simulation may give rough indications but cannot yield conclusive results. Due to the obvious danger as well as the high economic impact of inefficiencies, experimental measurements are infeasible. Finally, modes are highly dependent on the power grid's structure and will change with changes in the power grid over time. For all of these reasons, a grid frequency modulation system must be designed very conservatively, without relying on the absence (or presence) of modes at particular frequencies. A concrete design guideline that we can derive from this situation is that the frequency spectrum of any grid frequency modulation system should not exhibit any notable peaks and should avoid a concentration of spectral energy in certain frequency ranges.
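To get a feeling for the absolute numbers involved, we can combine the figures quoted above into a rough estimate of the load swing a transmitter would have to produce. The modulation amplitude of \SI{10}{\milli\hertz} used here is an arbitrary illustrative value well below both ENTSO-E thresholds mentioned above, not a design decision made at this point.
\begin{equation*}
    \Delta P = \frac{\Delta P}{\Delta f} \cdot \Delta f \approx \SI{25}{\giga\watt\per\hertz} \cdot \SI{10}{\milli\hertz} = \SI{250}{\mega\watt}
\end{equation*}
Even at this small modulation amplitude, the transmitter has to swing a load on the order of a few hundred megawatts, which is consistent with the idea of repurposing a large industrial load such as an aluminium smelter as outlined above.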
\subsubsection{Overall system parameters}
% FIXME
\subsubsection{An outline of practical implementation}
% FIXME

\section{From grid frequency to a reliable communications channel}
% FIXME
\subsection{Channel properties}
% FIXME
\subsection{Modulation and its parameters}
The sensitivity of the grid to oscillations at particular frequencies described above means we should avoid any modulation technique that would concentrate a lot of energy in a small bandwidth. Taking this principle to its extreme provides us with a useful pointer towards techniques that might work well: Spread-spectrum techniques. By employing spread-spectrum modulation we can produce an almost ideal frequency-domain behavior that spreads the modulation energy almost flat across the modulation bandwidth\cite{goiser01} while at the same time achieving some modulation gain, increasing system sensitivity. The modulation gain that spread-spectrum techniques yield potentially allows us to use a weaker stimulus, further reducing the probability of disturbing the overall system. Spread-spectrum techniques also inherently allow us to tune the tradeoff between receiver sensitivity and data rate. This tunability is a highly useful parameter to have for the overall system design.

Spread spectrum covers a whole family of techniques. \textcite{goiser01} separates these techniques into the coarse categories of \emph{Direct Sequence Spread Spectrum}, \emph{Frequency Hopping Spread Spectrum} and \emph{Time Hopping Spread Spectrum}. \textcite{goiser01} assumes a BPSK or similar modulation underlying the spread-spectrum technique. Our grid frequency modulation channel effectively behaves more like a DC-coupled wire than a traditional radio channel: Any change in excitation will cause a proportional change in the receiver's measurement. Using our FFT-based measurement methodology we get a real-valued signed quantity. In this way grid frequency modulation is similar to a channel using coherent modulation. We can transmit not only signal strength, but polarity too.

For our purposes we can discount both Time and Frequency Hopping Spread Spectrum techniques. Time hopping helps reduce interference between multiple transmitters but does not help with SNR any more than Direct Sequence does.
% FIXME verify this.
Our system is strictly limited to a single transmitter, so we do not gain anything through Time Hopping. Frequency Hopping Spread Spectrum techniques require a carrier. Grid frequency modulation itself is very limited in peak frequency deviation $\Delta f$. Frequency hopping could only be implemented as a second modulation on top of GFM, but this would not yield any benefits while increasing system complexity and decreasing data bandwidth.

Direct Sequence Spread Spectrum is the only remaining approach for our application. Direct Sequence Spread Spectrum works by directly modulating a long pseudo-random bit sequence onto the channel. The receiver must know the same pseudo-random bit sequence and continuously calculates the correlation between the received signal and the pseudo-random template sequence mapped from binary $[0, 1]$ to bipolar $[1, -1]$. Since the pseudo-random sequence has an approximately equal number of $0$ and $1$ bits, the correlation between the sequence and uncorrelated noise is small: the positive contributions of the $+1$ terms of the correlation template approximately cancel the negative contributions of the $-1$ terms when multiplied with an uncorrelated signal such as white Gaussian noise or another pseudo-random sequence.
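A minimal numerical sketch of this despreading effect is shown below. It uses a plain random chip sequence rather than the Gold codes introduced next, idealized additive white Gaussian noise and made-up parameters, so it illustrates the principle only.

\begin{minted}{python}
import numpy as np

rng = np.random.default_rng(0)

n_chips = 1023                                     # length of the spreading sequence
template = rng.integers(0, 2, n_chips) * 2 - 1     # pseudo-random chips mapped to +/-1

def transmit(bit, chip_amplitude=0.1):
    """One spread bit: the template scaled by the bit's polarity, buried in noise."""
    signal = (1 if bit else -1) * chip_amplitude * template
    return signal + rng.normal(0, 1, n_chips)      # per-chip SNR is well below one

def correlate(received):
    """Correlate against the known template; the sign recovers the transmitted bit."""
    return np.dot(received, template) / n_chips

print(correlate(transmit(1)))                  # close to +0.1
print(correlate(transmit(0)))                  # close to -0.1
print(correlate(rng.normal(0, 1, n_chips)))    # uncorrelated noise: close to 0
\end{minted}

The noise term in the correlation sum averages down by a factor of roughly $\sqrt{1023} \approx 32$ in amplitude relative to a single chip, which is the modulation gain referred to above; lengthening the sequence trades data rate for more of this gain.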
By using a family of pseudo-random sequences with low cross-correlation, channel capacity can be increased: either the transmitter can encode data in the choice of sequence, or multiple transmitters can use the same channel at once. The longer the pseudo-random sequence, the lower its cross-correlation with noise or with other pseudo-random sequences of the same length. By choosing a long sequence we increase modulation gain while decreasing bandwidth. For any given application the sweet spot will be the shortest sequence that is long enough to yield sufficient SNR for subsequent processing layers such as channel coding.

A popular family of codes used in many DSSS systems are Gold codes. A set of Gold codes has small cross-correlations. For some value $n$, a set of Gold codes contains $2^n + 1$ sequences of length $2^n - 1$. Gold codes are generated from two different maximum length sequences produced by linear feedback shift registers (LFSRs). For any bit count $n$ there are certain empirically determined preferred pairs of LFSRs that produce Gold codes with especially good cross-correlation properties. The $2^n + 1$ Gold codes are defined as the XOR sums of the two LFSR sequences, with one of them shifted by $0$ to $2^n - 2$ bit, plus the two individual LFSR sequences themselves. Given LFSR sequences \texttt{a} and \texttt{b}, in numpy notation this is \mintinline{python}{[a, b] + [a ^ np.roll(b, shift) for shift in range(len(b))]} (a more complete sketch is given below). In DSSS modulation the individual bits of the DSSS sequence are called \emph{chips}. Chip duration determines modulation bandwidth\cite{goiser01}. In our system we are directly modulating DSSS chips onto mains frequency without an underlying modulation such as the BPSK commonly used in DSSS systems.

\subsection{Error-correcting codes}
To make our overall system reliable we have to layer some channel coding on top of our DSSS modulation. The messages we expect to transmit are at least a few tens of bits long. We are highly constrained in SNR due to limited transmission power, and with lower SNR comes a higher bit error rate (BER). Packet error rate grows exponentially with transmission length, so for our relatively long transmissions we would realistically get unacceptable error rates without channel coding.

Error-correcting codes are a very broad field with many options for specialization. Since we are implementing nothing more than a prototype in this thesis, we chose not to expend too many resources on optimization and settled for a comparatively simple low-density parity check (LDPC) code. The state of the art has advanced considerably since the discovery of general LDPC codes. %FIXME cite
The main areas of improvement are overhead and decoding speed. Since transmission length % FIXME have we defined this yet?
in our system limits system response time, but we do not have a fixed target for the latter, we can tolerate some degree of sub-optimal overhead.
% FIXME get actual proper numbers on our stuff vs. some state of the art citations.
Decoding speed is of no concern to us as our data rate is extremely low. An important concern for our prototype implementation was the availability of reference implementations of our error-correcting code. We need a Python implementation for test signal generation on a regular computer and we need a small C or C++ implementation that we can adapt to embedded firmware. LDPC codes are a popular textbook example of error-correcting codes and we had no particular difficulty finding either.
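Returning briefly to the DSSS layer, the following Python sketch spells out the Gold code construction quoted above. The LFSR tap sets chosen here are just example values for $n = 5$; whether they form a \emph{preferred pair} in the technical sense is not verified here, and a real design would take a documented preferred pair from the literature.

\begin{minted}{python}
import numpy as np

def lfsr_sequence(taps, n):
    """One period (2**n - 1 bits) of a Fibonacci LFSR sequence.

    `taps` lists feedback tap positions, 1-indexed from the newest stage."""
    state = np.ones(n, dtype=np.uint8)        # any non-zero initial state works
    out = np.empty(2**n - 1, dtype=np.uint8)
    for i in range(out.size):
        out[i] = state[-1]                    # output the oldest stage
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = np.concatenate(([feedback], state[:-1]))
    return out

def gold_codes(a, b):
    """All 2**n + 1 Gold codes built from two m-sequences a and b of equal length."""
    return [a, b] + [a ^ np.roll(b, shift) for shift in range(len(b))]

a = lfsr_sequence([5, 3], 5)          # example m-sequence generators for n = 5
b = lfsr_sequence([5, 4, 3, 2], 5)
codes = gold_codes(a, b)
print(len(codes), codes[0].size)      # 33 codes of length 31
\end{minted}

The cross-correlation properties promised by a true preferred pair can then be checked numerically by correlating all generated codes against each other.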
\subsection{Cryptographic security}
Informally, the system we are looking for can be modelled as consisting of three parties: the trusted \textsc{Transmitter}, one of a large number of untrusted \textsc{Receivers}, and an \textsc{Attacker}. These three play according to the following rules:
\begin{enumerate}
    \item \textsc{Transmitter} and \textsc{Attacker} can both transmit any bit sequence.
    \item \textsc{Receiver} receives any transmission by either \textsc{Transmitter} or \textsc{Attacker} but cannot distinguish between the two on the signal level.
    \item \textsc{Attacker} knows anything a \textsc{Receiver} might know.
    \item \textsc{Transmitter} is stronger than \textsc{Attacker} and will ``win'' in a simultaneous transmission.
    \item Both \textsc{Transmitter} and \textsc{Receiver} can be seeded with some information on each other such as public key fingerprints.
\end{enumerate}
We are not interested in congestion scenarios where an attacker attempts to disrupt an ongoing transmission by the transmitter. In practice there are several avenues to prevent such attempts, including the following: Compromised loads that are being abused by the attacker can be manually disconnected by the utility. Error-correcting codes can be used to provide resilience against small-scale disturbances. Finally, the transmitter can be designed to have high enough power to be able to override any likely attacker.

Our goal is to find a cryptographic primitive that has the following properties:
\begin{enumerate}
    \item \textsc{Transmitter} can produce a transmission bit sequence $\mathbf{s}$ (or equivalently a set of such sequences) that \textsc{Receiver} can uniquely identify as being generated by \textsc{Transmitter}: $\mathcal{R}\left(\mathbf{s}\right) = 1$. Upon reception of this sequence, \textsc{Receiver} performs the safety reset.
    \item \textsc{Attacker} cannot forge $\mathbf{s}$, that is, find an $\mathbf{s}'$ such that $\mathbf{s} \neq \mathbf{s}' \land \mathcal{R}\left(\mathbf{s}'\right) = 1$.
    \item Our system conforms to an at-most-once semantic. That is, upon transmission of a valid bit sequence coded for a particular \textsc{Receiver} or set of receivers, each one either performs exactly one safety reset or none at all. We cannot achieve an exactly-once semantic since we are using a unidirectional, lossy communication primitive. More colloquially, \textsc{Receiver} might be offline due to a localized power outage and might thus not hear \textsc{Transmitter} even if our broadcast primitive is reliable. The practical impact of this limitation can be mitigated by the transmitter simply repeating itself until the desired effect has been achieved.
\end{enumerate}
An important limitation following from the rules of our setup above is that \textsc{Attacker} can always record the bit sequence \textsc{Transmitter} transmits and replay that same sequence later. Before considering any cryptographic approaches we can make the preliminary observation that we can trivially prevent \textsc{Attacker} from violating the at-most-once criterion by simply requiring \textsc{Receiver} to memorize all bit sequences that have been transmitted thus far and to only react to new bit sequences. This means an attacker might be able to cause offline receivers to reset at a later point, but considering our goal is to reset them in the first place this would not pose a danger to the system.
% FIXME elaborate why this is not a threat, and possible mitigations
It seems we need a cryptographic primitive that looks somewhat like a signature.
It seems, then, that we need a cryptographic primitive that looks somewhat like a signature. Different from a signature, however, we have somewhat relaxed constraints here: while cryptographic signatures need to work over arbitrary inputs, all we want to ``sign'' here is the instruction to perform a safety reset. Since this is the only message we might ever want to transmit, our message space has only one entry and thus the informational content of our message is 0 bit! All the information we want to transmit is already encoded \emph{in the fact that we are transmitting}, and we do not require any further payload. This means we can omit the message entirely and just transmit whatever ``signature'' we produce. This is useful since we have to conserve transmission bits so our transmissions do not take an exceedingly long time over our extremely slow communication channel.

We could use any of several traditional asymmetric cryptographic primitives to produce these signatures. The comparatively high computational effort required for signature verification would not be an issue: transmissions take several minutes anyway, so we can afford to spend even tens of seconds on signature verification. Transmission length, and by proxy system latency, would be determined by the length of the signature. For RSA, signature length equals the modulus length (i.e.\ larger than 1000 bit for even basic contemporary security). For elliptic curve-based systems, signature size is approximately twice the curve length (i.e.\ around 300 bit for contemporary security). However, we can do better than this: we can exploit the peculiarity of our setting that our effective message entropy is 0 bit to derive a more efficient scheme.

\subsubsection{Lamport signatures}

In 1979, \textcite{lamport02} introduced a signature scheme that is based only on a one-way function such as a cryptographic hash function. The basic observation is that by choosing a random secret input to a one-way function and publishing the output, one can later prove knowledge of the input by simply publishing it. In the following paragraphs we will describe a construction of a one-time signature scheme based on this observation. The scheme we describe is the one usually called a ``Lamport signature'' in modern literature and is slightly different from the variant described in the 1979 paper, but for our purposes we can consider both to be equivalent.

\paragraph{Setup.} In a Lamport signature, for an $n$-bit hash function $H$ the signer generates a private key $s = \left(s_{b, i} \,\middle|\, b\in\left\{0, 1\right\}, 0\le i < n\right)$ of $2n$ random values.
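To make the construction concrete, the following Python sketch implements the textbook Lamport scheme for signing an arbitrary message, assuming SHA-256 as the one-way function. It is meant purely to illustrate the mechanics and is not the exact scheme we end up transmitting.

\begin{minted}{python}
import hashlib, secrets

N = 256                                     # output size of the hash in bits
H = lambda data: hashlib.sha256(data).digest()

def keygen():
    # private key: 2*N random values s[b][i]; public key: their hashes
    sk = [[secrets.token_bytes(32) for _ in range(N)] for _ in (0, 1)]
    pk = [[H(x) for x in row] for row in sk]
    return sk, pk

def bits_of(message):
    digest = H(message)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(N)]

def sign(sk, message):
    # reveal, for every bit i of the message hash with value b, the secret s[b][i]
    return [sk[b][i] for i, b in enumerate(bits_of(message))]

def verify(pk, message, signature):
    return all(H(signature[i]) == pk[b][i]
               for i, b in enumerate(bits_of(message)))
\end{minted}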
\begin{figure}
\begin{center}
% The TikZ block diagram drawing is not reproduced here.
\end{center}
\caption{Frequency sensor hardware diagram}
\label{fmeas-sens-diag}
\end{figure}

An overall block diagram of our system is shown in fig.\ \ref{fmeas-sens-diag}. The microcontroller we chose is an \texttt{STM32F030F4P6} ARM Cortex-M0 microcontroller made by ST Microelectronics. The ADC in fig.\ \ref{fmeas-sens-diag} is this microcontroller's integrated 12-bit ADC, which is sufficient for our purposes. The USB interface is a simple USB-to-serial converter IC (\texttt{CH340G}) and galvanic digital isolation is accomplished with a pair of high-speed optocouplers on its \texttt{RX} and \texttt{TX} lines. The analog signal processing is a simple voltage divider using high-power resistors to provide the required creepage distance, along with some high-frequency filter capacitors and an op-amp buffer. The power supply is an off-the-shelf mains-input power module. The system is implemented on a single two-layer PCB housed in an off-the-shelf industrial plastic case fitted with a printed label and a few status lights on its front.

\subsection{Clock accuracy considerations}

Our measurement hardware samples line voltage at some sampling rate $f_S$, e.g.\ \SI{1}{\kilo\hertz}. All downstream processing is limited in accuracy by the accuracy of $f_S$\footnote{
We are not considering the effects of clock jitter. We are highly oversampling the signal and the FFT done in our downstream processing will eliminate small jitter effects, leaving only frequency stability to worry about.
}. We generate our sampling clock in hardware by triggering the ADC from one of the microcontroller's timer blocks, which is in turn clocked from the microcontroller's system clock. This means our ADC's sampling window is synchronized cycle-accurately to the microcontroller's system clock. Our downstream measurement of mains frequency is by nature relative to our sampling frequency $f_S$.

In the setup described above this means we have to make sure our system clock is fairly stable. A frequency deviation of \SI{1}{ppm} in our system clock causes a proportional grid frequency measurement error of $\Delta f = f_\text{nom} \cdot 10^{-6} = \SI{50}{\micro\hertz}$. In a worst case where our system is clocked from a particularly bad crystal that exhibits \SI{100}{ppm} of instability over our measurement period, we end up with an error of \SI{5}{\milli\hertz}. This is well within our target measurement range and would thus corrupt our measurements, so we need a more stable clock source. Ideally we also want to avoid writing our own clock conditioning code that adjusts an oscillator's operating frequency to match some reference: clock conditioning algorithms are highly complex, and in our case post-processing the measurement data and simply adding an offset is simpler and less error-prone.

Our solution to these problems is to use a crystal oven\footnote{
A crystal oven is a crystal oscillator thermally coupled closely to a heater and temperature sensor and enclosed in a thermally isolated case. The heater is controlled to hold the crystal oscillator at a near-constant temperature a few tens of degrees above ambient. Any ambient temperature variations are absorbed by the temperature control. This yields a crystal frequency that is almost completely unaffected by ambient temperature variations below the oven temperature and whose main remaining instability is aging.
} as our main system clock source. Crystal ovens are expensive compared to ordinary crystal oscillators. Since any crystal oven will be much more accurate than a standard room-temperature crystal, we chose to reduce cost by using one recycled from old telecommunications equipment. To verify clock accuracy we routed an externally accessible SMA connector to a microcontroller pin that is connected to one of the microcontroller's timer inputs.
By connecting a GPS 1pps signal to this pin and measuring its period we can calculate our system's Allan variance\footnote{
Allan variance is a measure of frequency stability between two clocks.
}, thereby measuring both clock stability and clock accuracy. We ran a 4-hour test of our frequency sensor that generated the histogram shown in figure \ref{ocxo_freq_stability}. These results show that while we get a systematic error of about \SI{10}{ppm} due to manufacturing tolerances, the random error at less than \SI{10}{ppb} is smaller than that of a room-temperature crystal oscillator by 3--4 orders of magnitude. Since we are interested in grid frequency variations over time but not in the absolute value of grid frequency, the systematic error is of no consequence to us. The random error of \SI{3.66}{ppb} corresponds to a frequency measurement error of about \SI{0.2}{\micro\hertz}, well below what we can achieve at reasonable sampling rates and ADC resolution.

\begin{figure}
\centering
\includegraphics{../lab-windows/fig_out/ocxo_freq_stability}
\caption{OCXO frequency deviation from nominal \SI{19.440}{\mega\hertz} measured against GPS 1pps}
\label{ocxo_freq_stability}
\end{figure}

\subsection{Firmware implementation}

The firmware uses one of the microcontroller's timers, clocked from an external crystal oscillator, to produce a \SI{1}{\milli\second} tick from which the internal ADC is triggered, for a sample rate of \SI{1}{\kilo sps}. Higher sample rates would be possible, but reliable data transmission over the opto-isolated serial interface might prove challenging, and \SI{1}{\kilo sps} corresponds to $20$ samples per cycle at $f_\text{nominal}$. This is ten times the Nyquist rate and should be plenty for accurate measurements.

The ADC measurements are read using DMA and written into a circular buffer. Using some DMA controller features this circular buffer is split into front and back halves, with one being written to while the other is being read. Buffer contents are moved from the ADC DMA buffer into a packet-based reliable UART interface as they come in. The UART packet interface keeps two ringbuffers: one byte-based ringbuffer for transmission data and one ringbuffer of packet pointers that keeps track of ADC data packet boundaries in the byte-based ringbuffer. Every time a chunk of data is available from the ADC, the data is framed into the byte-based ringbuffer and the packet boundaries are logged in the packet pointer ringbuffer. If the UART transmitter is idle at this time, a DMA-backed transmission of the oldest packet in the packet ringbuffer is triggered. Data is framed using Consistent Overhead Byte Stuffing (COBS)\footnote{
COBS is a framing technique that allows encoding $n$ bytes of arbitrary data into exactly $n+1$ bytes (for packets shorter than \SI{254}{\byte}) with no embedded $0$-bytes, which can then be delimited using $0$-bytes. COBS is simple to implement and allows both one-pass decoding and encoding. The encoder either needs to be able to read up to \SI{256}{\byte} ahead or needs a buffer of \SI{256}{\byte}. COBS is very robust in that it allows self-synchronization: at any point a receiver can reliably synchronize itself against a COBS data stream by waiting for the next $0$-byte. The constant overhead allows precise bandwidth and buffer planning and provides constant, good efficiency close to the theoretical maximum.}\cite{cheshire01} along with a CRC-32 checksum for error checking.
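For illustration, the following minimal Python sketch of a COBS encoder follows the reference algorithm from \textcite{cheshire01}. The \texttt{frame\_packet} helper at the end is hypothetical and only indicates where the CRC-32 and the $0$-byte delimiter fit in; the exact packet layout of our firmware may differ.

\begin{minted}{python}
import zlib

def cobs_encode(data: bytes) -> bytes:
    # Replace every 0-byte by a "distance to next zero" code byte, so that the
    # encoded output itself contains no 0-bytes.
    out, block = bytearray(), bytearray()
    for byte in data:
        if byte == 0:
            out.append(len(block) + 1)   # code byte: offset of this zero
            out += block
            block.clear()
        else:
            block.append(byte)
            if len(block) == 254:        # longest run without an implied zero
                out.append(0xFF)
                out += block
                block.clear()
    out.append(len(block) + 1)           # final (possibly empty) block
    out += block
    return bytes(out)

def frame_packet(payload: bytes) -> bytes:
    # Hypothetical framing helper: append CRC-32, COBS-encode, delimit with 0x00.
    crc = zlib.crc32(payload).to_bytes(4, "little")
    return cobs_encode(payload + crc) + b"\x00"
\end{minted}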
When the host receives a new packet with a valid checksum it returns an acknowledgement packet to the sensor. When the sensor receives the acknowledgement, the acknowledged packet is dropped from the transmission packet ringbuffer. When the host detects an incorrect checksum it simply stays quiet and waits for the sensor to retransmit once the next ADC buffer has been received.

The serial interface logic accounts for most of the complexity of the sensor firmware. This complexity is necessary since we need reliable, error-checked transmission to the host: though rare, bit errors on a serial interface do happen, and data corruption is unacceptable. The packet-layer queueing on the sensor is necessary since the host is not a realtime system and unpredictable latency spikes of several hundred milliseconds are possible.

The host in our recording setup is a Raspberry Pi 3 Model B running a Python script. The Python script handles serial communication and logs data and errors into an SQLite database file. SQLite was chosen for its simple yet flexible interface and its good tolerance of system resets due to unexpected power loss. Overall our setup performed adequately, with IO contention on the Raspberry Pi/Linux side causing only 16 skipped sample packets over a 68-hour recording span.

\subsection{Frequency sensor measurement results}

\begin{figure}
\centering
\includegraphics{../lab-windows/fig_out/freq_meas_trace_24h}
\caption{Trace of grid frequency over a 24 hour window. Clearly visible features are the large positive and negative transients at full hours. Times shown are UTC. Note that the European continental synchronous area that this sensor is placed in covers several time zones, which may result in images of daily load peaks appearing at one-hour intervals. Fig.\ \ref{freq_meas_trace_mag} contains two magnified intervals from this plot.}
\label{freq_meas_trace}
\end{figure}

\begin{figure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics{../lab-windows/fig_out/freq_meas_trace_2h_1}
\caption{A 2 hour window around 00:00 UTC.}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics{../lab-windows/fig_out/freq_meas_trace_2h_2}
\caption{A 2 hour window around 18:30 UTC.}
\end{subfigure}
\caption{Two magnified 2 hour windows of the trace from fig.\ \ref{freq_meas_trace}.}
\label{freq_meas_trace_mag}
\end{figure}

\begin{figure}
\centering
\includegraphics{../lab-windows/fig_out/mains_voltage_spectrum}
\caption{Power spectral density of the mains voltage trace in fig.\ \ref{freq_meas_trace}. We can see the expected peak at \SI{50}{\hertz} along with smaller peaks at odd harmonics. We can also see a number of spurious tones both between harmonics and at low frequencies, as well as some bands containing high noise energy around \SI{0.1}{\hertz}. This graph demonstrates a high signal-to-noise ratio that is not very demanding on our frequency estimation algorithm.
}
\label{mains_voltage_spectrum}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=\textwidth]{../lab-windows/fig_out/freq_meas_spectrum}
\caption{Power spectral density of the 24 hour grid frequency trace in fig.\ \ref{freq_meas_trace}, with some notable peaks annotated with the corresponding period in seconds. The $\frac{1}{f}$ line indicates a pink noise spectrum. Around a period of \SI{20}{\second} the PSD starts to fall off at about $\frac{1}{f^3}$ until we can make out some bumps at periods around $2$ and \SI{3}{\second}. Starting at around \SI{1}{\hertz} we can see a white noise floor on the order of \si{\micro\hertz^2\per\hertz}.
% TODO: where does this noise floor come from?
% Is it a fundamental property of the grid? Is it due to limitations of our measurement setup (such as OCXO stability/phase noise)?
}
\label{freq_meas_spectrum}
\end{figure}

Captured raw waveform data is processed in the Jupyter Lab environment\cite{kluyver01} and grid frequency estimates are extracted as described in sec.\ \ref{frequency_estimation} using the \textcite{gasior01} technique. Appendix \ref{grid_freq_estimation_notebook} contains the Jupyter notebook we used for frequency measurement. In fig.\ \ref{freq_meas_feedback} we fed the frequency estimator's own output back into it, giving us an indication of its numerical performance. The result was \SI{1.3}{\milli\hertz} of RMS noise over a \SI{3600}{\second} simulation time. This indicates performance is good enough for our purposes. In addition we validated our algorithm's performance by applying it to the test waveforms from \textcite{wright01}. In this test we got errors of \SI{4.4}{\milli\hertz} for the \emph{noise} test waveform, \SI{0.027}{\milli\hertz} for the \emph{interharmonics} test waveform and \SI{46}{\milli\hertz} for the \emph{amplitude and phase step} test waveform. Full results can be found in fig.\ \ref{freq_meas_rocof_reference}.

\begin{figure}
\centering
\includegraphics[width=\textwidth]{../lab-windows/fig_out/freq_meas_feedback}
\caption{
The frequency estimation algorithm applied to a synthetic noise-less mains waveform generated from its own output. This feedback simulation gives an indication of numerical errors in our estimation algorithm. The top four graphs show a comparison of the original trace (blue) and the re-calculated trace (orange). The bottom trace shows the difference between the two. As we can tell, both traces agree very well, with an overall RMS deviation of about \SI{1.3}{\milli\hertz}. The bottom trace shows the deviation growing over time. This is very likely an effect of numerical errors in our ad-hoc waveform generator.
}
\label{freq_meas_feedback}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=\textwidth]{../lab-windows/fig_out/freq_meas_rocof_reference}
\caption{
Performance of our frequency estimation algorithm against the test suite specified in \textcite{wright01}. Shown are standard deviation and variance measurements as well as time-domain traces of differences.
}
\label{freq_meas_rocof_reference}
\end{figure}

\section{Channel simulation and parameter validation}

To validate all layers of our communication stack from modulation scheme to cryptography we built a prototype implementation in Python. Implementing all components in a high-level language builds up familiarity with the concepts while taking away much of the implementation complexity. For our demonstrator we will not be able to use Python since our target platform is a cheap, low-end microcontroller; the demonstrator firmware will have to be written in a low-level language such as C or Rust. For prototyping, these languages lack flexibility compared to Python.
% FIXME introduce project outline, specs -> proto -> demo above!

To validate our modulation scheme we first performed a series of simulations on our Python demodulator prototype implementation. To simulate a modulated grid frequency signal we added a synthetic modulation signal to background noise; a sketch of this construction is given below. For most simulations we used measured frequency data gathered with our frequency sensor as the background noise.
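The following minimal sketch shows how a single simulated trial input could be assembled. The helper and its parameters are illustrative rather than the prototype's actual code; the amplitude is given in the same unit as the background trace (here \si{\milli\hertz}) and \texttt{decimation} is the number of \SI{10}{\hertz} grid frequency samples per DSSS chip.

\begin{minted}{python}
import numpy as np

def simulate_rx_signal(code, background, amplitude, decimation=10):
    # One trial input: a random segment of background grid frequency noise
    # (numpy array) plus one DSSS symbol. `code` is a 0/1 Gold code.
    chips = 2 * np.asarray(code, dtype=float) - 1    # map {0,1} -> {-1,+1}
    mod = np.repeat(chips, decimation) * amplitude   # rectangular chip pulses
    start = np.random.randint(0, len(background) - len(mod))
    return background[start:start + len(mod)] + mod
\end{minted}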
We only have a limited amount of capture data, and re-using segments of this data as background noise in multiple simulation runs could hypothetically lead to our simulation results depending on individual features of this particular capture that would be common between all runs. To estimate the impact of this problem we re-ran some of our simulations with artificial random noise synthesized with a power spectral density matching that of our capture. To do this, we first measured our capture's PSD, then fitted a low-resolution spline to the PSD curve in log-log coordinates. We then generated white noise, multiplied its DFT with the resampled spline and performed an inverse DFT on the result. The resulting time-domain signal is our synthetic grid frequency data; a sketch of this procedure is given at the end of this section. Fig.\ \ref{freq_meas_spectrum} shows the PSD of our measured grid frequency signal. The red line indicates the low-resolution log-log spline interpolation used for shaping our artificial noise. Fig.\ \ref{simulated_noise_spectrum} shows the PSD of our simulated signal overlaid with the same spline as a red line, and shows time-domain traces of both the simulated (blue) and reference signals (orange) at various time scales. Visually both signals look very similar, suggesting we have found a good synthetic approximation of our measurements.

\begin{figure}
\centering
\includegraphics[width=\textwidth]{../lab-windows/fig_out/simulated_noise_spectrum}
\caption{Synthetic grid frequency in comparison with measured data. The topmost graph shows the synthetic spectrum compared to the spline approximation of the measured spectrum (red line). The other graphs show time-domain synthetic data (blue) in comparison with measured data (orange).
}
\label{simulated_noise_spectrum}
\end{figure}

In our simulations, we varied four main parameters of our modulation scheme and demodulation algorithm and observed their impact on symbol error rate (SER):
\begin{description}
\item[Modulation amplitude.] Higher amplitude should correspond to a lower SER.
\item[Modulation bit count.] A higher bit count $n$ means longer transmissions but yields higher theoretical decoding gain and should increase demodulator sensitivity. Ultimately, we want to find a sweet spot of manageable transmission length at good demodulator sensitivity.
\item[Decimation] or DSSS chip duration. The chip time determines where in the grid frequency spectrum (fig.\ \ref{freq_meas_spectrum}) our modulated signal is located. Given our noise spectrum (fig.\ \ref{freq_meas_spectrum}), lower chip durations (shifting our signal upwards in the spectrum) should yield lower in-band background noise, which should correspond to lower symbol error rates.
\item[Demodulation correlator peak threshold factor.] The first step of our prototype demodulation algorithm is to calculate the correlation of the input data against all $2^n+1$ Gold sequences
% FIXME add a \ref here, describe proto demod alg somewhere
and to identify peaks indicating that the input data contains a correctly aligned Gold sequence. The threshold factor determines how large a peak must be, relative to baseline noise levels, to be considered in the subsequent maximum likelihood estimation (MLE) decoding.
% FIXME do we actually do MLS?
\end{description}

As indicated by our results, symbol error rate is a good proxy of demodulation performance. With decreasing signal-to-noise ratio, margins in various parts of the demodulator decrease, which statistically leads to an increased symbol error rate. Our simulations yield smooth, reproducible SER curves with adequately low error bounds, indicating that SER relates fairly monotonically to the signal-to-noise margins inside our demodulator prototype.
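The noise synthesis procedure described above can be sketched as follows. This is a simplified re-implementation for illustration rather than the notebook's actual code: the smoothing spline parameters are arbitrary, and the spectral shaping is applied as the square root of the interpolated PSD so that the synthetic signal's PSD matches the measured one up to an overall scale factor.

\begin{minted}{python}
import numpy as np
import scipy.signal, scipy.interpolate

def synthesize_noise(measured, fs=10.0, n_out=100_000):
    # Measure the PSD of the captured grid frequency trace.
    f, psd = scipy.signal.welch(measured, fs=fs, nperseg=4096)
    # Fit a coarse smoothing spline in log-log coordinates (skip the DC bin).
    spline = scipy.interpolate.UnivariateSpline(
        np.log(f[1:]), np.log(psd[1:]), k=3, s=2 * len(f))
    # Resample the spline onto the synthetic signal's frequency bins.
    f_out = np.fft.rfftfreq(n_out, d=1 / fs)
    shape = np.exp(spline(np.log(np.maximum(f_out, f[1]))))
    # Shape the spectrum of white Gaussian noise and transform back.
    white = np.fft.rfft(np.random.normal(size=n_out))
    return np.fft.irfft(white * np.sqrt(shape), n=n_out)
\end{minted}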
\begin{figure}
\centering
\includegraphics{../lab-windows/fig_out/dsss_gold_nbits_overview}
\caption{
Symbol error rate (SER) as a function of transmission amplitude. The line indicates the mean of several measurements for each parameter set. The shaded areas indicate one standard deviation from the mean. Background noise for each trial is a random segment of measured grid frequency; background noise amplitude is the same for all trials. Shown are four traces for four different DSSS sequence lengths. Using a 5-bit Gold code, one DSSS symbol measures 31 chips; 6 bit per symbol are 63 chips, 7 bit are 127 chips and 8 bit are 255 chips. This simulation uses a decimation of 10, which corresponds to a \SI{1}{\second} chip length at our \SI{10}{\hertz} grid frequency sampling rate. At 5 bit per symbol, one symbol takes \SI{31}{\second} and one bit takes \SI{6.2}{\second} amortized. At 8 bit, one symbol takes \SI{255}{\second} (4 min 15 s) and one bit takes \SI{31.9}{\second} amortized. Here, slower transmission speed buys coding gain; all else being the same, this allows for a decrease in transmission power.
}
\label{dsss_gold_nbits_overview}
\end{figure}

\begin{figure}
\centering
\includegraphics{../lab-windows/fig_out/dsss_gold_nbits_sensitivity}
\caption{
Amplitude in \si{\milli\hertz} at an SER of 0.5, depending on symbol length. Here we can observe an increase of sensitivity with increasing symbol length, but we can clearly see diminishing returns above 6 bit (63 chips). Considering that each additional bit roughly doubles overall transmission time for a given data length, lower bit counts seem preferable if the necessary transmitter power can be realized.
}
\label{dsss_gold_nbits_sensitivity}
\end{figure}

\begin{figure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics{../lab-windows/fig_out/dsss_thf_amplitude_5678}
\caption{
\footnotesize SER vs.\ amplitude graph similar to fig.\ \ref{dsss_gold_nbits_overview}, with dependence on threshold factor color-coded. Each graph shows traces for a single DSSS symbol length.
}
\label{dsss_thf_amplitude_5678}
\end{subfigure}
\end{figure}

\begin{figure}
\ContinuedFloat
\begin{subfigure}{\textwidth}
\centering
\includegraphics{../lab-windows/fig_out/dsss_thf_sensitivity_5678}
\caption{
\footnotesize Graphs of amplitude at $SER=0.5$ for each symbol length as well as asymptotic SER for large amplitudes. Areas shaded red indicate that $SER=0.5$ was not reached for any amplitude in the simulated range. We can observe that smaller symbol lengths favor lower threshold factors, and that optimal threshold factors for all symbol lengths are between $4.0$ and $5.0$.
}
\label{dsss_thf_sensitivity_5678}
\end{subfigure}
\caption{
Dependence of demodulator sensitivity on the threshold factor used for correlation peak detection in our DSSS demodulator. This is an empirically determined parameter specific to our demodulation algorithm. At low threshold factors our classifier yields lots of spurious peaks that have to be thrown out by our maximum likelihood estimator. These spurious peaks have a random time distribution and thus do not pose much of a challenge to our MLE, but at very low threshold factors the sheer number of spurious peaks slows down decoding and clogs our MLE's size-limited internal candidate lists, which leads to failed decodings.
At very high threshold factors decoding performance suffers greatly since many valid correlation peaks get incorrectly ignored. The glitches at medium threshold factors in the 7- and 8-bit graphs are artifacts of our prototype decoding algorithm that we have not fixed in the prototype implementation since we wanted to focus on the final C version.}
\label{dsss_thf_sensitivity}
\end{figure}

\begin{figure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\textwidth]{../lab-windows/fig_out/chip_duration_sensitivity_5}
\caption{
5 bit Gold code
}
\label{chip_duration_sensitivity_5}
\end{subfigure}
\end{figure}

\begin{figure}
\ContinuedFloat
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\textwidth]{../lab-windows/fig_out/chip_duration_sensitivity_6}
\caption{
6 bit Gold code
}
\label{chip_duration_sensitivity_6}
\end{subfigure}
\caption{
Dependence of demodulator sensitivity on DSSS chip duration. Due to computational constraints this simulation is limited to 5 bit and 6 bit DSSS sequences. There is a clearly visible sensitivity maximum at fairly short chip lengths around \SI{0.2}{\second}. Short chip durations shift the entire transmission band up in frequency. In fig.\ \ref{freq_meas_spectrum} we can see that noise energy is mostly concentrated at lower frequencies, so shifting our signal up in frequency moves the band of interest into a lower-noise spectral region and reduces the amount of noise the decoder sees behind the correlator. For a practical implementation, chip duration is limited by physical factors such as the maximum modulation slew rate ($\frac{\text{d}P}{\text{d}t}$), the maximum rate of change of frequency (ROCOF, $\frac{\text{d}f}{\text{d}t}$) the grid can tolerate, and possible inertial effects limiting the response of frequency to load changes at certain load levels.
% FIXME are these inertial effects likely? Ask an expert.
}
\label{chip_duration_sensitivity}
\end{figure}

\begin{figure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\textwidth]{../lab-windows/fig_out/chip_duration_sensitivity_cmp_meas_6}
\caption{
Simulation using baseline frequency data from actual measurements.
}
\label{chip_duration_sensitivity_cmp_meas_6}
\end{subfigure}
\end{figure}

\begin{figure}
\ContinuedFloat
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\textwidth]{../lab-windows/fig_out/chip_duration_sensitivity_cmp_synth_6}
\caption{
Simulation using synthetic frequency data.
}
\label{chip_duration_sensitivity_cmp_synth_6}
\end{subfigure}
\caption{
Chip duration/sensitivity simulation results as in fig.\ \ref{chip_duration_sensitivity}, compared between a simulation using measured frequency data (as in the previous graphs) and one using artificially generated noise. There is almost no visible difference, indicating both that our noise synthesizer is a good model of reality and that real grid frequency behaves like a spectrally shaped Gaussian noise process.
}
\label{chip_duration_sensitivity_cmp}
\end{figure}

\section{Implementation of a demonstrator unit}
%FIXME
\section{Experimental results}
%FIXME
\section{Lessons learned}
%FIXME

\chapter{Future work}
\section{Technical standardization}

The description of a safety reset system provided in this work could be translated into a formalized technical standard with relatively low effort. Our system is very simple compared to e.g.\ a full smart meter communication standard and thus can conceivably be described in a single, concise document.
The much more complicated side of standardization would be the standardization of the backend operation, including key management, coordination and command authorization.

\section{Regulatory adoption}

Since the proposed system adds significant cost and development overhead at no immediate benefit to either the consumer or the utility company, it is unlikely to be adopted voluntarily. Market forces limit the long-term planning utility companies can do; an advanced mitigation such as this one might be out of their reach on their own and might require regulatory intervention to be implemented.

To regulatory authorities, a system such as this one provides a powerful primitive to guard against attacks. Due to its low-level approach, our system might allow a regulatory authority to restore meters to a safe state without the need for fine-grained control over implementation details such as application network protocols. A regulatory authority might specify that all smart meters must use a standardized reset controller that, on command, resets the meter to a minimal firmware image that disables external communication, continues basic billing functions and enables any disconnect switches. This would enable the \emph{reset authority} to directly preempt a large-scale attack irrespective of the implementation details of the various smart meters.

Cryptographic key management for the safety reset system is not much different from the management of highly privileged signing keys as they are already used in many other systems. If the safety reset system is implemented with a regulatory authority as the \emph{reset authority}, they would likely be able to find a public entity that already manages root keys for other government systems to also manage safety reset keys. Availability and security requirements of safety reset keys do not differ significantly from those of other types of root keys.

\section{Practical implementation}
%FIXME

\section{Zones of trust}

In our design, we opted for a safety reset controller
% FIXME is "safety reset" the proper name here? We need some sort of branding, but is this here really about "safety"?
in the form of a microcontroller entirely separate from whatever application microcontroller the smart meter design is already using. This design cleanly separates the meter into an untrusted application domain (the core microcontroller) and the trusted reset controller. Since the interface between the two is simple and logically one-way, it can be validated to a high standard of security.

Despite these security benefits, the cost of such a separate hardware device might prove too high for a mass-market rollout. In this case, one might attempt to integrate the reset controller into the core microcontroller in some way. There are primarily two ways to accomplish this. One is a solution that physically integrates an additional microcontroller core into the main application microcontroller package, either as a submodule on the same die or as a separate die in a multi-chip module (MCM) with the main application microcontroller. A full-custom solution integrating both on a single die might be a viable path for very large-scale deployments, but will most likely be too expensive in tooling costs alone to justify its use. More likely for a medium- to large-scale deployment (millions of meters) would be an MCM integrating an off-the-shelf smart metering microcontroller die with the reset controller running on another, much smaller off-the-shelf microcontroller die.
This solution might save some cost compared to one using a discrete microcontroller for the reset controller.

The more likely approach to reducing the cost overhead of the reset controller would be to employ virtualization technologies such as ARM's TrustZone in order to incorporate the reset controller firmware into the application firmware on the same chip, without compromising the reset controller's security or disturbing the application firmware's operation. TrustZone is a virtualization technology that provides a hardware-assisted privileged execution domain on at least one of the microcontroller's cores. In traditional virtualization setups, a privileged hypervisor manages several unprivileged applications and shares resources between them. Separation between applications in this setup is longitudinal, between adjacent virtual machines: two applications both run in unprivileged mode sharing the same CPU, and the hypervisor merely schedules them, configures hardware resource access and coordinates communication. This longitudinal virtualization simplifies application development since, from the application's perspective, the virtual machine looks very similar to a physical one. In addition, this setup in general reciprocally isolates applications, with neither one being able to gain control over the other.

In contrast to this, a TrustZone-like system in general does not provide several application virtual machines and longitudinal separation. Instead, it provides lateral separation between two domains: the unprivileged application firmware and a privileged hypervisor. Application firmware may communicate with the hypervisor through defined interfaces, but due to TrustZone's design it need not even be aware of the hypervisor's existence. This makes it a perfect fit for our reset controller: the reset controller firmware would run in privileged mode without exposing any communication interfaces to the application firmware, and the application firmware would run in unprivileged mode without any modification. The main hurdles to the implementation of a system like this are, on the one hand, the requirement for a microcontroller providing this type of virtualization and, on the other hand, the complexity of correctly employing this virtualization. Virtualization systems such as TrustZone are still orders of magnitude more complex to configure correctly than simply using separate hardware and securing the interfaces in between.

\chapter{Alternative use of grid frequency modulation}
% FIXME random beacons? funky consensus protocols? proof of knowledge/cryptographic notary service?

\chapter{Conclusion}
%FIXME
\newpage
\appendix

\chapter{Acknowledgements}
%FIXME
\newpage

\chapter{References}
\nocite{*} % FIXME
\printbibliography
\newpage

\chapter{Transcripts of Jupyter notebooks used in this thesis}
%\includenotebook{Grid frequency estimation}{grid_freq_estimation}
%\includenotebook{Grid frequency estimation validation against ROCOF test suite}{freq_meas_validation_rocof_testsuite}
%\includenotebook{Frequency sensor clock stability analysis}{gps_clock_jitter_analysis}
%\includenotebook{DSSS modulation experiments}{dsss_experiments-ber}

\chapter{Demonstrator schematics and code}

\chapter{Economic viability of countermeasures}
\section{Attack cost}
\section{Countermeasure cost}
% FIXME maybe include a standard for the technical side of a safety reset system here, e.g. in the style of an IETF draft?

\end{document}