**Time synchronization via IEEE 1588 has been an IEEE standard since 2008 and is already used in many areas. Until now, using the standard has always meant exotic hardware, i.e. network adapters implemented in FPGAs or embedded controllers. With the introduction of the Intel I21x and I35x network chip families, the standard is now available for the consumer market. This lays the foundation for new projects based on consumer hardware.**

Several IEEE 1588 software implementations are commercially available. The American LXI Consortium even provides a virtually free implementation of the standard (only membership fees apply). The German company TSEP developed this IEEE 1588 implementation in cooperation with the LXI Consortium; TSEP also distributes a paid version, especially for companies that are not interested in LXI membership.

The basic question for every IEEE 1588 project is what accuracy the time synchronization must achieve. The achievable accuracy usually depends on the hardware used, the topology, and the control algorithm. Modern IEEE 1588 implementations make it possible to define different control algorithms and to exchange them easily. TSEP has likewise defined the control algorithm as an independent module with defined interfaces, so users can define their own algorithm and integrate and test it in the system.

If, for example, IEEE 1588 is used to synchronize WLAN loudspeakers, the human ear sets the scale for accuracy. The human auditory system can detect delay differences of 10 µs and more, so the accuracy achieved for synchronizing the WLAN loudspeakers must be better than 10 µs.

Measurement tasks, however, require different accuracies. Within measurement technology, measurements are usually initiated by triggers. As a rule, these triggers are signal changes (rising or falling edges, level crossings, etc.) transmitted via cable from the source to the measuring device. The propagation delay within the trigger cable is therefore the decisive reference value for the accuracy. Assuming a cable length of approx. 5 meters, which is rather generous, one arrives at a propagation delay of 25 ns (about 5 ns per meter). The accuracy required for metrological problems can thus be placed in this order of magnitude. With the introduction of 5G technologies in mobile communications, however, this order of magnitude has shifted significantly downwards in measurement technology: for these technologies, accuracies in the sub-nanosecond range are desirable.

IEEE 1588 tries to synchronize several free-running clocks. Each of these clocks is usually implemented as a counter that is incremented at a given frequency. From the frequency and the counter reading, the current time can be derived at any moment. Since it is technically impossible for several oscillators to generate identical frequencies, the clocks must be readjusted. Because it is technically much easier to manipulate the counter increment than the oscillator itself, the increment is what gets changed. This adjustment must be made by a control algorithm, since the measurements it is based on are subject to various disturbances; disturbances that occur during transmission must also be taken into account. Since each IEEE 1588 implementation is based on its own hardware and hardware topology, there can be no single, general control algorithm.
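The counter-based clock just described can be sketched in a few lines of Python (an illustrative model, not the Intel driver interface; all names are hypothetical):

```python
class CounterClock:
    """Clock realized as a counter: each oscillator cycle adds one increment."""

    NOMINAL_INCREMENT_NS = 8.0  # 125 MHz oscillator: one tick every 8 ns

    def __init__(self):
        self.time_ns = 0.0
        self.increment_ns = self.NOMINAL_INCREMENT_NS

    def tick(self, n=1):
        # each oscillator cycle adds the (possibly trimmed) increment
        self.time_ns += n * self.increment_ns

    def adjust(self, delta_ns_per_tick):
        # readjust by changing the counter step, not the oscillator frequency
        self.increment_ns = self.NOMINAL_INCREMENT_NS + delta_ns_per_tick
```

After 125,000,000 ticks at the nominal increment, exactly one second of clock time has elapsed; trimming the increment slightly speeds up or slows down the derived time without touching the oscillator.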

In principle, the algorithms can be divided into two groups. The first group is based on rather simple algorithms, which usually concentrate only on determining the frequency error of the local clock from the measured time difference between master and slave (derived via the MeanPathDelay). This type of algorithm is independent of the hardware topology used and provides useful results. The free LinuxPTP implementation and the TSEP implementation each contain such an algorithm by default.

The second group comprises algorithms that try to identify the errors in the system and include them in the calculation of the local frequency error. These algorithms only make sense if the hardware used and the expected topology are known; the error models can then be created on the basis of that hardware. Kalman filters are particularly suitable for this type of control algorithm, as they can be modeled specifically for the problem at hand.

Each control algorithm contains at least two states. In the first state, the offset between master and slave (MeanPathDelay) is so large that the algorithm cannot close the gap in an acceptable settling time. In this state, the time received from the master is taken over directly as the slave time, without correction, in the hope that the offset determined in the next synchronization interval is significantly smaller. This state is maintained until an acceptable offset is reached; it is also the default state after starting the clock or when synchronization is lost due to problems. In the second state, the actual control algorithm takes effect, determining correction values for the local clock and approximating its time as closely as possible to the master clock time.
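The two states can be sketched as a small state machine (a hedged illustration; the threshold value and all names are assumptions, not the TSEP interface):

```python
# Assumed threshold for "gap too large to control away"; implementation-specific.
STEP_THRESHOLD_NS = 1_000_000

class SyncStateMachine:
    """Two-state sync logic: hard-set the clock, or run the servo."""

    def __init__(self, servo):
        self.servo = servo
        self.state = "STEPPING"  # default state after start or lost sync

    def on_sync(self, offset_ns, master_time_ns, clock):
        if abs(offset_ns) > STEP_THRESHOLD_NS:
            # state 1: take the master time directly, without correction
            clock.set_time(master_time_ns)
            self.state = "STEPPING"
        else:
            # state 2: closed-loop control toward the master clock
            self.servo.update(offset_ns, clock)
            self.state = "TRACKING"
```

The `clock` and `servo` objects are duck-typed here; any object offering `set_time()` and `update()` fits.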

As already mentioned, this first type of algorithm determines only the correction value of the local clock from the measured MeanPathDelay. An error model of the system plays little or no role here. Such algorithms are relatively simple and can be used independently of the hardware.

The mode of operation of such an algorithm can be illustrated by the standard control algorithm of the TSEP IEEE 1588 implementation. In a first step, this simple control algorithm tries to keep invalid or wrong MeanPathDelay values out of the control loop. The implementation discussed here is based on Intel network chips of the I21x family, which use Gigabit Ethernet according to IEEE standard 802.3. This type of network is not deterministic: every participant can access the network at any time, and access is arbitrated via packet collisions. This can result in packets being transmitted much later than is actually assumed, and the delay does not appear in the transmitted data packets. To protect the control loop from such false and therefore disturbing data, the control algorithm tries to detect these packets and exclude them.

*Figure 1: Wrong MeanPathDelay due to network delays*

The diagram above shows such a wrong packet, which was included in the control loop and then had to be compensated. These false data can be recognized by a significantly increased MeanPathDelay. To detect them, the standard deviation of the MeanPathDelay is calculated:

*Figure 2: Calculation of the standard deviation of the MeanPathDelay*

If a new MeanPathDelay clearly exceeds the calculated standard deviation, it is not used for further processing.
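A minimal sketch of such an outlier guard, assuming a sliding window and a 3-sigma criterion for "clearly exceeds" (both are assumptions; the article does not specify them):

```python
import statistics

class MeanPathDelayFilter:
    """Reject MeanPathDelay samples that deviate clearly from recent history."""

    def __init__(self, window=32, n_sigma=3.0):
        self.window = window
        self.n_sigma = n_sigma   # assumed 3-sigma rule
        self.samples = []

    def accept(self, mpd_ns):
        """Return True if the sample may enter the control loop."""
        if len(self.samples) >= 4:
            mean = statistics.fmean(self.samples)
            sigma = statistics.stdev(self.samples)
            if sigma > 0 and abs(mpd_ns - mean) > self.n_sigma * sigma:
                return False  # likely a packet delayed by network load
        self.samples.append(mpd_ns)
        self.samples = self.samples[-self.window:]
        return True
```

Note that rejected samples are also kept out of the statistics themselves, so a single delayed packet does not widen the acceptance band.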

In the next step, an attempt is made to calculate the correction of the local clock from the measured MeanPathDelay. For this, the deviation of the slave from the master, determined via the MeanPathDelay algorithm, is converted to the frequency of the local counter (clock). In the first step, the error of the local clock per counter step is calculated:

*Figure 3: Calculation of the internal clock error*

The Intel I21x can vary, within certain limits, the amount by which its counter is incremented every 8 ns. The error per counter increment (according to the above formula) is programmed into the hardware by the algorithm.

Such simple control algorithms always cause the system to oscillate, since the correction value follows the determined error directly. To avoid such oscillation, the TSEP IEEE 1588 control algorithm additionally considers the first derivative of the MeanPathDelay.
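One possible shape of such a damped controller, with the gains and the sign convention chosen purely for illustration (the actual TSEP coefficients are not published here):

```python
class DampedServo:
    """Proportional correction of the offset, damped by its first derivative."""

    def __init__(self, kp=0.7, kd=0.3):
        self.kp = kp            # proportional gain on the measured offset
        self.kd = kd            # gain on the first derivative (damping term)
        self.prev_offset = None

    def correction(self, offset_ns, interval_s=1.0):
        """Return the correction to apply, in ns per sync interval."""
        derivative = 0.0
        if self.prev_offset is not None:
            derivative = (offset_ns - self.prev_offset) / interval_s
        self.prev_offset = offset_ns
        # negative sign: steer against the measured deviation
        return -(self.kp * offset_ns + self.kd * derivative)
```

The derivative term opposes rapid changes of the offset, which is what suppresses the oscillation described above.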

*Figure 4: Advanced control algorithm*

If, however, error models and hardware topologies are to be taken into account, other approaches must be used. Kalman filters (also called Kalman-Bucy-Stratonovich filters) can be applied to such problems. The filter is named after Rudolf E. Kálmán, Richard S. Bucy and Ruslan L. Stratonovich, who discovered the method independently or made significant contributions to it. The Kalman filter is used to reduce errors in real measured values and to provide estimates for system variables that cannot be measured directly. The prerequisite is that the necessary values can be described by a mathematical model. The special feature of the filter presented by Kálmán in 1960 [3] is its mathematical structure, which enables its use in real-time systems in various technical areas, including electronic control loops in communication systems. In mathematical estimation theory it is also called a Bayesian minimum-variance estimator for linear stochastic systems in state-space representation.

The Kalman filter tries to include error models in the estimation of the actual correction value. As a first step, it is necessary to identify the error sources within an IEEE 1588 realization.

One error source in an IEEE 1588 implementation is the stability of the internal counter. The current time is derived from this counter, whose clock rate is in turn derived from an oscillator or another source. In the Intel I21x and I35x network chips it is derived from the Ethernet clock, i.e. 125 MHz.

*Figure 5: Measurement setup with Omicron Grandmaster Clock*

To get an overview of the stability of this clock, measurements were made in the TSEP laboratory (see picture above). Several computer boards and PC plug-in cards with the corresponding network chips were measured. For this purpose, the internal clock of the network chips was operated with a constant adjustment over several hours. A PPS signal (pulse per second) was output on a GPIO of the Intel network chip and measured with an oscilloscope. An Omicron Grandmaster Clock served as the grandmaster clock.

*Figure 6: Error distribution at room temperature*

The diagram above shows the deviation of the period of the PPS signal. It can be seen that the clock error is grouped around the center position and moves within about +/- 2000 ns. Based on the 125 MHz clock rate, this corresponds to the following error per increment of the counter:

**2000 ns per second / 125 × 10⁶ increments per second = 16 × 10⁻¹⁵ s per increment**
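The figure can be re-checked with a two-line calculation: a worst-case drift of 2000 ns accumulated over one second, spread over the 125 × 10⁶ counter increments in that second:

```python
# Re-checking the arithmetic above.
drift_per_second = 2000e-9          # seconds of error accumulated per second
increments_per_second = 125e6       # 125 MHz counter clock

error_per_increment = drift_per_second / increments_per_second
print(error_per_increment)          # 1.6e-14, i.e. 16 x 10^-15 s per increment
```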

This error magnitude could also be measured with other Intel I21x network cards and embedded computers with Intel I21x chips.

Another point is the temperature stability of the internal clock of the Intel I21x chips. For this purpose, the ambient temperature of the network chip was varied. The following two diagrams show the error at the corner temperatures of the chip.

*Figure 7: Error distribution of cooled network chip*

*Figure 8: Error distribution of heated network chip*

The diagrams show that the error distributions differ clearly: at increased temperature, the error occurs noticeably more frequently.

To better understand the following problems, some basics of Gigabit Ethernet according to IEEE standard 802.3 and of the Intel network chips must be discussed. Gigabit Ethernet originated from the previously established 100 Mbit/s Ethernet standard. The CAT 5 Ethernet cables used were designed for transmitting signals at 125 MHz. These cables contain four pairs of two wires each, of which only two pairs were used to transmit data. With Gigabit Ethernet, two bits are now transmitted over each of the four pairs:

**125 MHz x 2 bits x 4 communication channels = 1000 Mbps = 1 Gbps**

So two bits at a time are transferred on each of the four twisted pairs. The transmitter has to divide each byte into four parts and the receiver has to reassemble them. The symbol rate is 125 MHz.
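The splitting into four 2-bit groups can be illustrated as follows (a logical sketch only; the real PHY additionally applies PAM-5 line coding and scrambling):

```python
def split_byte(byte):
    """Transmitter side: return the four 2-bit groups, one per twisted pair."""
    return [(byte >> (2 * pair)) & 0b11 for pair in range(4)]

def merge_parts(parts):
    """Receiver side: reassemble the byte from the four 2-bit groups."""
    byte = 0
    for pair, bits in enumerate(parts):
        byte |= (bits & 0b11) << (2 * pair)
    return byte
```

The receiver can only call `merge_parts` once all four groups of a byte have arrived, which is exactly why unequal pair lengths translate into reassembly delays.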

The four wire pairs are used simultaneously in both directions. The frequency used within Gigabit Ethernet is 125 MHz according to the GMII (Gigabit Media Independent Interface). The frequencies at transmitter and receiver are not necessarily coupled: depending on the direction of transmission, the data clock is derived either from the local clock or from that of the network. Because of this procedure, and the probable assumption that the four lines are not identical in length, different propagation delays must be expected for the individual partial data (4 x 2 bits). Only after all four parts of a byte have arrived can they be reassembled, which can lead to delays of up to 20 ns depending on the scenario. The whole problem is described very well in the document "Improving IEEE 1588 synchronization accuracy in 1000BASE-T systems" [1]. Since these effects unfortunately lie in the ns range, they considerably influence the accuracy of time synchronization via IEEE 1588. With copper-based Gigabit Ethernet implementations, reaching the sub-nanosecond range becomes very complex to almost impossible. This fact was certainly one of the reasons why the White Rabbit system [2] uses fiber optic networks.

Only in small systems, or during development, can one count on just a grandmaster clock, a transparent clock (switch) and a slave clock. In reality, one must assume connections in which the grandmaster's packets travel over several transparent clocks or even non-PTP-compliant switches. All these error contributions add up along the transport chain from master to slave. To improve the control algorithm, one can try to detect the errors that occur in the individual switching nodes and integrate them into the algorithm. For this purpose, a Kalman filter can be used to create a model that takes the individual errors into account. The paper [4] "Accurate Time Synchronization in PTP-based Industrial Networks with Long Linear Path" by Daniele Fontanelli and David Macii shows how to model this problem using a Kalman filter, and also that this approach can improve the accuracy of IEEE 1588 systems in this scenario.

Around 1960, Rudolf E. Kálmán developed a special method for time-discrete linear systems in order to estimate the states of a system (including its parameters) from noisy and partly redundant measurements. This method became known as the Kalman filter and was first published in [3]. Since then, many variants of the Kalman filter have been published. The following description can also be found in detail in [5].

For the Kalman filter to be used correctly, the basic conditions of the measurement system must be known. Every classic Kalman filter consists of a state-space description and the real measurement system with its system and measurement noise. From this, the prediction and correction can be calculated with the help of the Kalman filter. Basically, a Kalman filter estimates the output variable ŷ(k) and compares it with the measured output y(k) of the real measurement system. The difference Δy(k) between the two values is weighted with the Kalman gain K(k) and used to correct the estimated state vector ẍ(k). The structure can be described as follows:

*Figure 9: Structure of a Kalman filter*

The five Kalman filter basic equations can be derived from this structure:

**Prediction:**

**ẍ(k+1) = Ad * ẋ(k) + Bd * u(k)**

**Ṕ(k+1) = Ad * Ṗ(k) * Adᵀ + Gd * Q(k) * Gdᵀ with Q(k) = Variance(z(k))**

**Correction:**

**K(k) = Ṕ(k) * Cᵀ * (C * Ṕ(k) * Cᵀ + R(k))⁻¹ with R(k) = Variance(v(k))**

**ẋ(k) = ẍ(k) + K(k) * (y(k) − C * ẍ(k) − D * u(k))**

**Ṗ(k) = (I − K(k) * C) * Ṕ(k)**
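A minimal scalar instance of the five equations, assuming a toy model with one state (Ad = C = Gd = 1, Bd = D = 0), e.g. a nearly constant clock offset observed through noisy measurements:

```python
def kalman_step(x_est, p_est, y, q, r):
    """One prediction/correction cycle of a scalar Kalman filter."""
    # prediction
    x_pred = x_est                      # ẍ(k+1) = Ad·ẋ(k) + Bd·u(k)
    p_pred = p_est + q                  # Ṕ(k+1) = Ad·Ṗ·Adᵀ + Gd·Q·Gdᵀ
    # correction
    k = p_pred / (p_pred + r)           # K = Ṕ·Cᵀ·(C·Ṕ·Cᵀ + R)⁻¹
    x_est = x_pred + k * (y - x_pred)   # ẋ = ẍ + K·(y − C·ẍ − D·u)
    p_est = (1.0 - k) * p_pred          # Ṗ = (I − K·C)·Ṕ
    return x_est, p_est

# filtering noisy measurements of a true offset of about 100 ns
x, p = 0.0, 1000.0
for y in [103.0, 97.0, 101.0, 99.0, 100.5]:
    x, p = kalman_step(x, p, y, q=0.01, r=4.0)
```

With a large initial covariance, the first measurement is taken over almost completely; as p shrinks, new measurements are weighted less and the estimate settles near the true offset.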

Due to its complexity, the derivation is not given here; it can be found in [3] or [5].

When designing a Kalman filter, the physical system must first be described continuously in time using differential equations. This defines the output vector y(t); it must be ensured that this output vector contains all noisy quantities. The quantities not affected by noise are described separately in the input vector u(t). For the state vector x(t) there can be several approaches; as a rule, these should be evaluated and the approach that best describes the problem should be used.

Once all equations and parameters have been chosen, the system is as follows:

**ẋ(t) = A * x(t) + B * u(t) + G * z(t)**

**y(t) = C * x(t) + D * u(t)**

This time-continuous description must then be converted into a time-discrete one. The standard discretization formulas can be used for this purpose:

**Ad = e^(A * TS)**

**Bd = ∫₀^TS e^(A * τ) dτ * B**

TS is the sampling interval of the system. The matrices C and D remain identical in the time-discrete system.
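For a commonly used two-state clock model x = (offset, drift) with A = [[0, 1], [0, 0]] (an assumed model, not taken from the article), A is nilpotent (A·A = 0), so the matrix exponential Ad = e^(A·TS) reduces exactly to I + A·TS and no series approximation is needed:

```python
def discretize_clock_model(ts):
    """Return Ad = I + A*ts for the nilpotent clock model A = [[0,1],[0,0]]."""
    return [[1.0, ts],
            [0.0, 1.0]]

def step(ad, x):
    """One discrete-time step: x(k+1) = Ad * x(k)."""
    return [ad[0][0] * x[0] + ad[0][1] * x[1],
            ad[1][0] * x[0] + ad[1][1] * x[1]]
```

With a drift of 1e-6 (1 µs per second) and TS = 1 s, the offset grows by exactly 1 µs per step, as expected from the model.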

Several approaches can be used to determine the matrix Gd, for example sampling with a Dirac impulse. The method must be chosen individually for each case.

After the description of the system in state space is available, observability can be checked. According to [3] or [5], a linear time-invariant system of order n is observable if the observability matrix SB has rank n. Criteria of Gilbert or Hautus can also be used. If the system is not observable, it can be divided into an observable and an unobservable subsystem.

Finally, the system noise Q(k) and the measurement noise R(k) have to be described.

**Q(k) = Variance(z(k))**

**R(k) = Variance(v(k))**

To use a Kalman filter optimally, these two parameters must be determined as accurately as possible. In simple systems they can be regarded as almost constant, but this is not the case in the IEEE 1588 environment: it must be assumed that these quantities change at runtime and therefore have to be determined adaptively.

There is a variant of the Kalman filter that determines the two covariance matrices adaptively: the ROSE filter (Rapid Ongoing Stochastic covariance Estimation filter). The principle is based on cyclically re-determining the covariance of the measurement noise R(k) by observing the measurable quantity y(k) using two embedded simple Kalman filters (with constant Kalman gain). Similarly, the covariance of the system noise Q(k) is determined from the value Δy(k), the measured quantity y(k), the covariance of the estimation error Ṕ(k+1), the covariance of the measurement noise R(k) and a simple Kalman filter.
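The underlying idea can be sketched in simplified form (this is not the ROSE filter itself, only a generic innovation-based re-estimation of R(k); all names and the window size are assumptions):

```python
import statistics

class InnovationNoiseEstimator:
    """Cyclically re-estimate the measurement noise R(k) from the innovations
    Δy(k) = y(k) − ŷ(k) observed over a sliding window."""

    def __init__(self, window=16, r_initial=1.0):
        self.window = window
        self.r = r_initial
        self.innovations = []

    def update(self, innovation):
        self.innovations.append(innovation)
        self.innovations = self.innovations[-self.window:]
        if len(self.innovations) >= 2:
            # sample variance of the innovations as the new R(k)
            self.r = statistics.pvariance(self.innovations)
        return self.r
```

Feeding the current R(k) back into the Kalman gain makes the filter trust its measurements less exactly when they become noisier, which is the adaptive behavior the ROSE filter aims at.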

Looking at the individual steps leading to a complete description of the state space and the determination of the covariances of the system and measurement noise, it can be said that the creation of a Kalman filter is anything but trivial. However, the boundary conditions of the measurement system can be optimally embedded in the Kalman filter, resulting in the best possible correction of the internal IEEE 1588 clock.

The successful use of an IEEE 1588 implementation rests not only on an existing IEEE 1588 stack or special hardware; a problem-oriented approach is the key to success. The possibilities of IEEE 1588 are broad, but without knowing the accuracy requirements and the available hardware topology it is difficult to deliver a system that meets the requirements. An up-front analysis is absolutely necessary and requires corresponding know-how.

The choice of hardware is essential, since the individual components influence the accuracy in different ways. Especially for high accuracies in the sub-nanosecond range, the use of fiber-optic systems is absolutely necessary. The White Rabbit project has done good preparatory work here and created the appropriate hardware and the necessary boundary conditions. The Intel network chips can also work with fiber-optic PHYs; corresponding hardware is available on the market as consumer products.

A perfectly tuned control algorithm is indispensable for high-precision systems. Choosing the appropriate algorithm is not easy, requires know-how, and must be adapted to the hardware and its topology. The implementation and simulation of the algorithm, especially for high-precision systems, is a considerable part of an IEEE 1588 project, but it is also the key to its success. Kalman filters are very well suited for implementing such efficient control algorithms; however, describing the state space and the covariances of the system and measurement noise requires some effort and time.

The article does not claim to present all facets of this extremely complex matter, but is intended to give a small overview of the problems and solutions that exist. Each available IEEE 1588 stack is only a tool; the correct use and implementation of the control algorithm is the actual task.

Author: Peter Plazotta, TSEP

Reference: channel-e