Design issues for Ethernet in Automation
Lucia Lo Bello, Orazio Mirabella

Abstract - In modern process control systems the different activities are organised by structuring the plant as a hierarchy of functional levels. In order to take into account the different requirements of the traffic exchanged at the various levels in the hierarchy, communication architectures are also organised as levels of networks using different communication protocols according to their specific operating conditions. The heterogeneous nature of these networks, however, causes problems of incompatibility and complicates the exchange of information between different levels. For this reason, manufacturers and users are currently showing great interest in harmonising the communication infrastructure at the plant level, and Ethernet technology seems to be the most natural candidate for a number of practical and economic reasons. This paper addresses some issues about the use of Ethernet as a single network technology in a process control system, especially focusing on how to improve timeliness in the delivery of real-time data over a Shared Ethernet network used at the Field level. Two suitable approaches are presented and evaluated using simulation. One of them is priority-based, while the second is based on adaptive traffic smoothing.
I. INTRODUCTION
Modern process control systems organise plant activities hierarchically, thus creating a separation between different functional levels that allows material and information flows to be handled efficiently. It is therefore necessary to structure the communication architectures hierarchically as well, to take the different traffic requirements into account. Modern process control systems usually have three levels, as shown in Fig. 1.

  The highest is the Backbone level, serving the whole plant and allowing traffic to be exchanged for high-level management of the various activities (from planning to acquisitions management, from design to testing, etc.). At the next level down we have Cell-level networks, which support both real-time and non real-time traffic. At the lowest level there are Fieldbuses, which mostly support real-time traffic with more or less urgent deadlines.
According to the specific operating conditions and traffic requirements, different communication protocols are currently used at the various levels. This obviously leads to problems of incompatibility, complicating the exchange of information between networks at different levels. Both manufacturers and users are therefore making great efforts towards a harmonisation of the communication infrastructure at the plant level.
   In this context, Ethernet is achieving a leading position, positioning itself as a network capable of supporting all communication needs at all levels in the control hierarchy.
  At present, Ethernet is generally adopted at the Cell level (in particular in systems with slow-dynamic processes), where non deterministic access times can be well tolerated.
   At the plant Backbone level, FDDI [1] and ATM [2] are still widely used, but Ethernet technology is gaining ground in this scenario as well, thanks to the availability of Fast Ethernet [3] and Gigabit Ethernet [4], which offer a number of advantages over other technologies. The main advantage is the large number of manufacturers, which means that Ethernet boards and devices such as switches and hubs are cheaper than those for other technologies. Secondly, for technicians or network administrators who are already familiar with the traditional 10 Mbps Ethernet, the change to Fast Ethernet or Gigabit Ethernet only entails a modest increase in know-how, compared with the greater effort required to acquire expertise in other technologies.
   Finally, at the field level, the possibility of using Ethernet is also very attractive, because the use of a single network makes it possible to overcome the intercommunication and interoperability problems that arise in current systems when different kinds of Fieldbus systems are used. Moreover, the bandwidth offered by an Ethernet is typically higher than that offered by Fieldbus systems.
  However, as field devices mainly exchange small-size data, the overhead introduced by the Ethernet frame size means that the data throughput obtained is significantly lower than the available bandwidth.
   Typically, the topology suggested today by manufacturers of communication systems for process control is based on an extensive use of switches. This solution allows the designer to cluster stations into separate collision domains (as a limit case, even a separate switch port for each field device is recommended). This solution, however, is not only expensive, but also does not allow for optimal usage of the bandwidth, as only a small portion of the bandwidth offered by each switch channel is actually used by a single field device. Moreover, the presence of the switches prevents the stations which monitor the network from directly accessing the data, thus limiting their action.
This paper addresses some issues about the use of Ethernet as a single network technology in a process control system. First the paper outlines how to use switches and hubs for the interconnection of the various nodes/devices in order to simplify data exchange while providing, when required, real-time performance. Then, the paper focuses on how to improve the timeliness in the delivery of real-time data over a Shared Ethernet used at the Field level. Two different approaches are presented and discussed. The first one introduces two different priority levels in the Ethernet protocol so as to privilege real-time traffic. The second approach, called adaptive traffic smoothing, is based on dynamic traffic smoothing [5][6], which consists of dynamically assigning each station a portion of bandwidth according to the current network workload. Simulation-based performance evaluation will show that both approaches, and the priority-based one in particular, can significantly improve timeliness in delivering real-time traffic and open up new application scenarios for Ethernet.
II. NETWORK ARCHITECTURE
  As said before, in modern process control systems there are typically three network levels supporting traffic with different requirements. Here, we will discuss how Ethernet can be used at all three levels, as long as appropriate design choices are made. At the Backbone level, where the most important performance parameter is throughput, a Fast Ethernet switch can be used, with a number of ports equal to the number of Cell networks (Fig. 2).

The large amount of bandwidth offered by the switch can easily support the typically high traffic flows at this level (for example, those originated by company database backup operations or file transfers to download applications). At the Cell level, on the other hand, the performance parameters of interest are both throughput and timeliness, as Cell networks have to support communications between devices (typically workstations) and between various Fieldbuses which may have real-time constraints. Thus the choice of the network architecture is mainly based on these two parameters. In addition, it may be useful to consider other aspects as well, such as the cost of network devices and simplicity of configuration.
At the Cell level the network can be collapsed either in a hub or a switch, as shown in Figs. 3b and 3a. In the first case (Fig. 3b), the Cell network and all the Fieldbuses will constitute a single collision domain and so, as the total traffic volume (i.e. Cell + Fieldbus traffic) increases, there may be a performance degradation for both the system as a whole and the single Fieldbuses. On the other hand, the use of a switch, as shown in Fig. 3a, creates separate collision domains between the various Fieldbuses, each of which only has to handle the traffic related to its nodes. This solution reduces the interference between different Fieldbuses, so the use of a switch appears to be preferable to that of a hub. Although a switch may cost more than a hub, this is not of great significance, due to the low number of switches required for networks at the Cell level.

   Moreover, the use of switches at the Backbone and/or Cell level makes it possible to exploit some features that recent advances in switching technology offer [7]. For example, thanks to their large number of ports, switches make micro-segmentation (where a single station has a dedicated switch port) a very attractive way of improving real-time performance, especially when it is combined with full-duplex operation (in this case, as contention and collisions are removed, there is no need for CSMA/CD and Binary Exponential Back-Off, respectively).
  Another interesting feature offered by switches is the capability to build Virtual LANs (VLANs) [8], which allow stations on multiple physical LAN segments to communicate as if they were on a common LAN without being constrained by their physical location. This characteristic, together with the flexibility and mobility it gives stations across multiple LAN segments, can be really appealing at both the Backbone and the Cell level. At these levels, in fact, the various network compartments do not operate in total isolation (as nodes belonging to different compartments or manufacturing cells can need to exchange data), and it may also happen that stations move across network segments. In addition, Switched Ethernet technology provides fault-tolerance mechanisms, such as spanning trees and port trunking, which can enhance dependability at the upper layers of the automation hierarchy. Moreover, the IEEE 802.1p [9] specification gives Layer 2 switches the ability to support different traffic classes using priorities (priority fields have been defined by the IEEE 802.1Q specification). In order to handle traffic priorities, switches should employ multiple buffer queues for each output port, so multiple-queue hardware is required. Even if the introduction of priorities can improve data handling according to its temporal constraints, congestion in full-duplex switches, which causes load to converge on a given port, remains a problem. For this reason, flow control capabilities have been introduced in switch technology. Basically, flow control eliminates dropped packets by "warning" stations which are overloading the switch. However, flow control reduces, but does not eliminate, the need for buffers, so related problems, such as switch buffer sizing and the handling of full buffers, have to be dealt with in the network design.
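  As an illustration of the multiple-queue hardware mentioned above, the following Python sketch models a strict-priority output port with one FIFO per traffic class; the class layout, method names and eight-class default are illustrative assumptions, not a description of any particular switch.

    from collections import deque

    class PriorityOutputPort:
        """Sketch of an 802.1p-style switch output port: one FIFO queue per
        traffic class; a higher class index means a higher priority."""

        def __init__(self, num_classes=8):
            self.queues = [deque() for _ in range(num_classes)]

        def enqueue(self, frame, traffic_class):
            # traffic_class would normally come from the 802.1Q priority field
            self.queues[traffic_class].append(frame)

        def dequeue(self):
            # Strict priority: always drain the highest non-empty class first.
            for q in reversed(self.queues):
                if q:
                    return q.popleft()
            return None  # port idle

    port = PriorityOutputPort()
    port.enqueue("best-effort frame", traffic_class=0)
    port.enqueue("real-time frame", traffic_class=6)
    assert port.dequeue() == "real-time frame"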
   At the Field level, where both real-time (with more or less critical time constraints) and non-real-time traffic is exchanged, although micro-segmentation can guarantee deterministic and reduced access times [10], it is nevertheless quite expensive, due to the high number of switches needed at this level. Such an approach is not even fully justified, as some of the attractive features of Switched Ethernet described above become less significant in the context of field device interconnection. For example, micro-segmentation wastes a large amount of bandwidth, as in general a field device actually requires only a small portion of the bandwidth offered by each switch channel. To better utilise the bandwidth, therefore, a hub could be used instead of a switch; but to obtain the required performance levels (in terms of timeliness in data delivery), not only does the network have to be accurately sized, but suitable channel access mechanisms to limit packet delays (as, for example, those described in Sections 3 and 4) are also needed.
III. A PRIORITY-BASED APPROACH
  When a network has to support various kinds of traffic featuring different time requirements, priority is an ideal mechanism to privilege traffic with more urgent time constraints. Ethernet does not provide for priorities, but it is possible to introduce two different priority levels by differentiating between the interframe gap of high-priority traffic and that of low-priority traffic. Here, in order to distinguish between the two priorities, the interframe gap used for high-priority traffic is the same as in the CSMA/CD standard [11], while that for low-priority traffic is increased by one slot time. Thanks to this additional delay, low-priority frames have a longer observation interval than high-priority frames, so they have more time to sense whether a high-priority transmission is in progress or not, and can therefore avoid a collision.
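  A minimal sketch of this deferral rule, assuming the standard 10 Mbit/s timing (96-bit interframe gap, 512-bit slot time), could look as follows in Python; the function is purely illustrative.

    # Two-level priority by differentiated interframe gap: low-priority
    # stations defer for one extra slot time after the bus goes idle, so a
    # pending high-priority frame wins the contention window without
    # colliding with low-priority traffic. Timings assume 10 Mbit/s Ethernet.
    BIT_TIME_US = 0.1                  # one bit time at 10 Mbit/s
    IFG_US = 96 * BIT_TIME_US          # 9.6 us, standard interframe gap
    SLOT_US = 512 * BIT_TIME_US        # 51.2 us, standard slot time

    def deferral_time_us(high_priority):
        """Time a station waits after sensing the bus idle before transmitting."""
        return IFG_US if high_priority else IFG_US + SLOT_US

    print(f"{deferral_time_us(True):.1f} us")    # 9.6 us  -> high-priority frame
    print(f"{deferral_time_us(False):.1f} us")   # 60.8 us -> low-priority frame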
  Of course, this approach requires modifying the standard protocol [11]. However, this is not a major problem, as what is required is just a modification of the internal configuration of the Ethernet board. For board manufacturers this simply entails modifying the board drivers, allowing for a dynamic reconfiguration according to the nature (real-time or non-real-time) of the frame to be transmitted.
   Another point worth mentioning is that the addition of a delay slot could penalise non real-time traffic. However, while for Ethernet networks used in general-purpose environments a slot time is over 50 microseconds, to take the maximum bus length into account, in a process control network the length of the bus can be reduced, so the slot time can also be proportionally shortened. As a result, the time overhead introduced to implement priorities can be considered negligible when compared with the transmission time of low-priority frames, which are generally of considerable size.
   However, the proposed approach has some limits. First, it privileges real-time traffic over non real-time traffic, but it does not reduce the risk of collisions between traffic of the same kind, for example between real-time frames. A solution to this is to introduce a re-transmission persistence [11] of less than one for real-time frames, so as to reduce the risk of simultaneous transmissions. For this reason, in Sect. 5.1 we present the results obtained in some simulations carried out with varying values of persistence for real-time traffic.
   Second, as the observation interval for non real-time traffic is measured from when the station notices that the bus is idle, if there is no real-time frame already in the output buffer when the current transmission on the bus terminates, it is as if the observation interval were reduced, and so the risk of non-real-time traffic colliding with real-time traffic increases. A solution to better tune the mechanism could be to differentiate the exponential back-off algorithm according to the type of traffic, so as to reduce the delay intervals for real-time traffic and extend them for non-real-time traffic.
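  A rough sketch of such a class-differentiated back-off is given below; the halving and doubling of the contention window are illustrative assumptions rather than values taken from this work.

    import random

    def backoff_slots(collision_count, high_priority):
        """Illustrative class-aware binary exponential back-off.

        Standard BEB picks a uniform integer in [0, 2^min(n,10) - 1]. Here
        the contention window is halved for real-time frames and doubled
        for non-real-time frames (the factors are assumptions)."""
        window = 2 ** min(collision_count, 10)
        window = max(1, window // 2) if high_priority else window * 2
        return random.randint(0, window - 1)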
  Despite these limits, the solution proposed in this Section offers a number of advantages, as shown by the performance results presented and discussed in Sect. 5.
IV. A TRAFFIC SMOOTHING-BASED APPROACH
  The idea of traffic smoothing was introduced in [12], where it was demonstrated that, under certain conditions, it is possible to implement a real-time statistical channel on an Ethernet. In general, a real-time statistical channel is defined as a one-way virtual circuit that guarantees the timely delivery of packets in statistical terms. However, when dealing with Ethernet networks, as the delay a packet undergoes depends on the number of attempts needed to transmit it successfully, the definition of real-time statistical channel has to be adapted so as to take collisions, or the number of attempts made before a successful transmission, into account.
  Consequently, a frame on an Ethernet real-time statistical channel has to meet the following condition:

    Prob{n > K} ≤ Z                                    (1)

where n is the number of attempts or re-transmissions needed for the frame to be successfully transmitted, counted from when it reaches the Application layer interface, K is a given number, while Z is a threshold called the loss tolerance.
  Condition (1) can also be expressed in terms of delay, as follows:

    Prob{d > D_K} ≤ Z                                  (2)

  where d is the delivery delay of the frame and D_K is the worst-case delay affecting a frame successfully transmitted at the K-th attempt.
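  As an example of how condition (1) can be checked against an observed trace of per-frame transmission attempts, consider the following sketch; the trace and the values of K and Z are invented for illustration.

    def meets_loss_tolerance(attempts_per_frame, K, Z):
        """Empirical check of condition (1): the fraction of frames needing
        more than K attempts must not exceed the loss tolerance Z."""
        late = sum(1 for n in attempts_per_frame if n > K)
        return late / len(attempts_per_frame) <= Z

    # Hypothetical trace: most frames succeed at the first attempt.
    trace = [1, 1, 2, 1, 1, 3, 1, 1, 1, 1]
    print(meets_loss_tolerance(trace, K=2, Z=0.2))  # True: 1 of 10 frames needed > 2 attempts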
  In [12] it was analytically demonstrated that for (1) and (2) to be met it is sufficient to keep the total arrival rate for new packets generated by stations below a threshold called the network-wide input limit. In order to maintain such a threshold without each station needing to continuously monitor traffic throughout the network (since the Ethernet MAC protocol is totally distributed, a single station is not aware of the current packet arrival rate for the whole network), in [12] each station is assigned a fixed portion of the network-wide input limit, called the station input limit. Each station thus regulates the packet stream arriving from the Application layer in such a way as to keep the packet arrival rate at its MAC sub-layer below the station input limit it has been assigned.
   Traffic smoothing only acts on non real-time packets to smooth traffic bursts, as when packets arrive in bursts they are more likely to collide. A real-time packet is therefore not affected by smoothing. Thanks to traffic smoothing, the packet arrival process can be modelled as a Poisson process, a basic condition for the analysis made in [12] (which is based on a semi-Markov model of the CSMA/CD with an exponential backoff algorithm).
  Traffic smoothing is realised locally in each Ethernet station by a software layer called a traffic smoother, inserted between the TCP/IP and the Data Link layer (as shown in Fig. 4), which buffers any non real-time packets arriving in a burst and sends them in such a way that their arrival at the MAC layer is staggered, thus keeping the arrival rate below the station input limit.

The traffic smoother is implemented by using a leaky bucket-based algorithm [13], where a credit bucket depth (CBD), which indicates the capacity of the credit bucket, and a Refresh Period (RP) are defined. Every RP seconds, up to CBD credits are replenished. If the number of credits exceeds the value of the CBD, any excess credits are discarded.
  When a non-real-time packet arrives from the IP layer, if there is at least one credit in the bucket the traffic smoother sends it to the Ethernet Network Interface Card (NIC) and removes a number of credits equal to the size in bytes of the packet. If there are too few credits in the bucket for the size of the packet, they are "borrowed", so the number drops to a negative value. If, on the other hand, the number of credits in the bucket when a packet arrives is less than or equal to zero, the packet is not transmitted to the Ethernet NIC until at least one credit becomes available following a replenishment.
  As said above, a real-time packet is not affected by smoothing, but its transmission does consume credits. This means that if there is both real-time and non real-time traffic in a station, the latter is transmitted using any credits that are left over after the transmission of the real-time traffic for that station.
   The CBD/RP ratio is the station input limit and determines the average throughput available for a station. Consequently, by varying the values of RP and CBD, it is possible to control the bursty nature of a flow of packets and thus of the traffic generated by the single stations.
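  A minimal Python sketch of the credit-bucket behaviour described above is given below; the class and method names and the send() callback are assumptions made for illustration, not the actual implementation.

    class TrafficSmoother:
        """Sketch of the credit-bucket traffic smoother described above.

        Credits are expressed in bytes. Every RP seconds the bucket is
        refilled up to CBD; non-real-time packets are released only while
        the credit count is positive, real-time packets bypass the check
        but still consume credits. In a real implementation refresh()
        would be driven by a timer with period RP."""

        def __init__(self, cbd_bytes, rp_seconds, send):
            self.cbd = cbd_bytes
            self.rp = rp_seconds            # refresh period
            self.credits = cbd_bytes
            self.send = send                # hands the packet to the Ethernet NIC
            self.backlog = []               # buffered non-real-time packets

        def refresh(self):
            """Called every RP seconds; excess credits are discarded at CBD."""
            self.credits = min(self.credits + self.cbd, self.cbd)
            while self.backlog and self.credits > 0:
                pkt = self.backlog.pop(0)
                self.credits -= len(pkt)    # may go negative ("borrowed" credits)
                self.send(pkt)

        def submit(self, pkt, real_time=False):
            """Entry point for packets coming down from the IP layer."""
            if real_time or self.credits > 0:
                self.credits -= len(pkt)    # real-time frames also consume credits
                self.send(pkt)
            else:
                self.backlog.append(pkt)    # held until credits are replenished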
   As in [12] each station is assigned a fixed portion of bandwidth, both CBD and RP are constant and the smoothing thus applied is said to be static. This solution avoids the burden for each station of monitoring the traffic produced by all the other stations, but it has the drawback of not making optimal use of the available bandwidth. The station input limit is, in fact, assigned to each station once and for all, irrespective of the actual load currently on the network (which can even be significantly below the network-wide input limit, as not all stations are necessarily transmitting at any one time). This solution thus entails a considerable waste of bandwidth. Also, scalability problems may arise when the number of connected stations is high.
  The limits of static smoothing are overcome by dynamic smoothing, which uses the idea of dynamically modifying (shaping) the station input limit a station is assigned, according to changes in the network load. In this way the bandwidth available for each station is no longer constant as it is with static smoothing, but varies dynamically. This improves bandwidth exploitation and also scalability.
   However, for this to be possible it is necessary to know the workload on the network at any one time. The problem of acquiring knowledge of the current network workload is solved by activating a suitable user process (called a sniffer) in each station to monitor the global traffic trend. This is possible because in Ethernet-based networks a transmitted frame is listened to by all the other stations.
   In order to evaluate the network load for dynamic smoothing purposes, different approaches have been proposed in the literature. For example, in [5] two approaches are described in which a feedback mechanism is activated based on the measurement of either the throughput or the number of collisions in a certain time interval. Another approach, which uses the harmonic-increase and multiplicative-decrease (HIMD) algorithm, is described in [6]. According to the HIMD adaptation, the RP is doubled when a packet collision is detected, while in the absence of collisions it is periodically decreased by a constant. The approach in [6] applies the HIMD adaptation to react to the detection of a single collision over an interval τ. Reacting to a single collision increases the responsiveness of the smoother, but can cause instability problems.
   In this paper we will deal with adaptive smoothing based on throughput control, which differs from the dynamic smoothing proposed in [5] in the adaptation mechanism used. Here, the bandwidth available for non real-time traffic is limited by adapting the refresh period, RP, to the throughput measured over an interval τ, while keeping the CBD value fixed. If the throughput value exceeds the pre-established threshold, the RP is doubled, up to a given RPmax value. If, on the other hand, the throughput does not exceed the threshold, the RP is decreased by a quantity Δ (equal to 50% of the initial RP) down to a value of RPmin. An adaptive smoothing based on the number of collisions would work in the same way, but what is counted is the number of collisions a station is affected by in the interval τ.
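  A sketch of this adaptation rule is given below; it interprets the increase as a doubling capped at RPmax (our reading of the rule above), and the numeric defaults simply mirror the parameters later used in Sect. 5.2.

    def adapt_rp(rp, measured_throughput_bps, threshold_bps,
                 rp_min=0.008, rp_max=0.1, delta=0.005):
        """Throughput-driven RP adaptation (sketch).

        When the non-real-time throughput measured over the last interval
        exceeds the threshold, RP is doubled (capped at rp_max), which
        shrinks the station input limit CBD/RP; otherwise RP is decreased
        by a fixed step delta (50% of the initial RP) down to rp_min."""
        if measured_throughput_bps > threshold_bps:
            return min(2 * rp, rp_max)
        return max(rp - delta, rp_min)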
   In the following Section we will evaluate the performance of the approach based on two levels of priority, which was proposed in Sect. 3, and that of the one based on adaptive smoothing, so as to provide elements that will be of use in sizing the network.
V. PERFORMANCE EVALUATION
  To evaluate the performance of Ethernet at the Field level, the network configuration chosen was the one shown in Fig. 5, i.e. a collapsed Fieldbus made up of one hub and 8 stations at a distance of 100 metres from each other, 4 of which only generate real-time traffic and 4 only non real-time traffic.
  As the aim here is to highlight the peculiarities of the two approaches discussed in this paper, i.e. the one based on priority and the one based on traffic smoothing, they were simulated under different traffic conditions. More specifically, to assess the capacity of the first approach (with 2 levels of priority) to meet the requirements of real-time traffic even in the presence of a heavy workload, the following scenario was chosen:

    -   4 stations transmitting high-priority (i.e. real-time) traffic, each generating 500 kbit/s of cyclic traffic and 320 kbit/s of acyclic traffic, giving a total of 3.28 Mbit/s.
    -   4 stations with low-priority (i.e. non real-time) traffic, generating 1 Mbit/s of acyclic traffic, with two bursts of 3.25 Mbit/s generated by two different stations. The first burst lasts from 20% to 30% of the simulation time, the other from 60% to 70%.
  On the other hand, for the simulations with traffic smoothing a slightly different scenario, featuring a more significant presence of bursts, was chosen. This made it possible to highlight the ability of this approach to limit the traffic on the network and the consequent benefit for real-time traffic. The scenario chosen was as follows:
    -   4 stations transmitting real-time traffic, each generating 500 kbit/s, giving a total of 2 Mbit/s.
    -   4 stations transmitting non real-time traffic, generating 500 kbit/s of acyclic traffic, with a burst of 5 Mbit/s generated by two stations, one from 20% to 30% and the other from 60% to 70% of the simulation time. In this way the non-real-time traffic varies between 2 and 9 Mbit/s (the latter is a peak which is only reached when there is a traffic burst).
   In both simulations, the main network parameters were those of the 10BASE-T standard as regards data rate, minimum frame length, header size and interframe gap. The average frame size was 1600 bits for real-time traffic and 2500 bits for non-real-time acyclic traffic, while for bursty frames the maximum size, i.e. 12208 bits, was chosen in order to model file transfers able to saturate the network. Each simulation lasted 100 seconds and the results were obtained by averaging three single simulations, so as to reduce effects deriving from random factors.
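  For reference, the raw transmission times of these frame sizes at 10 Mbit/s (ignoring the interframe gap, queuing and collisions) can be worked out as follows:

    # Transmission times at 10 Mbit/s for the frame sizes used in the simulations.
    RATE_BPS = 10_000_000
    for label, bits in [("real-time frame", 1600),
                        ("acyclic non-real-time frame", 2500),
                        ("bursty (maximum-size) frame", 12208)]:
        print(f"{label}: {bits / RATE_BPS * 1e6:.0f} us")
    # -> 160 us, 250 us and about 1221 us respectively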
   In the graphs shown in the following, the time traces refer to average values obtained over intervals of 10 milliseconds. It should also be pointed out that in the graphs the throughput in bits is always lower than the workload, even when no frames are lost, because the throughput values only refer to the data field of a frame, without the headers (i.e. this is an effective throughput).
5.1. Priority-Based Approach
  To improve the high-priority traffic performance, the persistence value was set equal to one for low-priority traffic, and to 0.6 for the high-priority one. This reduces the number of collisions for the high-priority traffic without affecting the low-priority traffic.
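  To make the role of persistence explicit: with a persistence value P below one, a station that is ready to (re)transmit when the channel becomes idle does so only with probability P, deferring for a slot otherwise. A minimal, purely illustrative sketch of this decision:

    import random

    def try_transmit(persistence):
        """Return True if the station transmits now, False if it defers one slot.

        With persistence = 1 (standard CSMA/CD) the station always transmits
        as soon as the channel is sensed idle; with persistence = 0.6 it
        transmits with probability 0.6, which makes simultaneous
        retransmissions between high-priority stations, and hence
        collisions, less likely."""
        return random.random() < persistence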
  Fig. 6 shows the overall throughput (top) and that for high-priority traffic alone (bottom). High-priority traffic remains constant, even in the presence of the two low-priority bursts that saturate the network up to over 8 Mbit/s, as in this approach the real-time traffic is not affected by the non real-time one. Fig. 7 shows channel access times for high-priority (top) and low-priority (bottom) frames.

It is interesting to note that, despite the channel being saturated, the presence of the bursts does not significantly affect the channel access time for high-priority traffic (the unit of measurement on the y-axis is microseconds), which is only delayed by the fact that the bus is busy with the long burst frames. On the other hand, the burst has a decidedly negative effect on the low-priority traffic, producing very long access times (hundreds of ms). Finally, it should be pointed out that, thanks to the priority mechanism, the access time for high-priority traffic is lower than 180 μs even with an average workload of over 6 Mbit/s.
  The graphs in Fig. 8 show the distribution of collisions over the whole of the simulation (high-priority at the top, low-priority at the bottom). Note that low-priority stations are affected by a considerable number of collisions, mainly due to the presence of very long frames in a burst, which cause the other low-priority stations to try to retransmit simultaneously (i.e. one slot time after the channel has become idle).
  For high-priority stations, on the other hand, the trend is much more favourable, as the number of frames affected by collisions quickly drops.

  At this point it is interesting to see whether it is possible to improve the real-time performance even further by varying the persistence (indicated with P) in order to avoid collisions between real-time frames. We therefore ran three simulations, keeping the same scenario as before, but varying the high-priority traffic persistence among the values (a) 0.4, (b) 0.2 and (c) 0.05. The curves in Fig. 9 show the access delay obtained for high-priority traffic alone.
   An improvement is observed with lower persistence values only during the workload peak due to a burst, but there are no significant differences in stationary conditions (i.e. when no bursts are present).
   Another interesting result about the effect of persistence on real-time traffic is gained by examining the collisions graph for the global traffic given in Fig. 10. With higher persistence values, the number of collisions is greater and some frames are even lost. The situation improves with lower persistence values, as the number of collisions decreases and no frames are lost due to an excess of collisions.
   This result would appear to contradict the delay curves in which no difference is observed in stationary conditions.
   Let us, however, examine Fig. 11, which gives the distribution of the delays for cyclic traffic versus a threshold (deadline) set to 5 ms. This situation is representative of a system in which the periodic traffic (produced by sensors, for example) has to be used by a consumer (e.g. a PLC) before a new scan cycle starts.



As can be seen in Fig. 11, the delivery time is almost always below the threshold and only sporadically does a sample go beyond this limit. If we then compare the delay distributions with persistence values of 0.4 and 0.05, we see that a lower persistence corresponds to a steeper curve, indicating a better delay distribution.
  In addition, with P = 0.05 the number of frames with a very low delay is higher than that obtained with P = 0.4.
  This result makes the approach even more attractive, as by appropriately choosing the persistence value it is possible to improve access delay performance even further.
5.2. Traffic Smoothing-Based Approach
  The aim of this series of simulations was to evaluate the performance of smoothing based on throughput control.
  The following parameters were used:
    -   CBD equal to 3000 bytes,
    -   RP initially set to 0.01 seconds and varying in a range of [0.008, 0.1] s,
    -   parameter control period equal to 2 ms,
    -   throughput threshold of 5.7 Mbit/s for non real-time traffic.
  These values limit the overall throughput of the non real-time stations to about 6 Mbit/s. Thus a single station can initially send 2.4 Mbit/s of traffic, with peaks of 3 Mbit/s (when there is a low overall workload), and with a lower bound of 240 kbit/s if the workload limit is exceeded.
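  These per-station figures follow directly from the station input limit CBD/RP, as the short calculation below shows.

    # Station input limit CBD/RP for the smoothing parameters of Sect. 5.2.
    CBD_BITS = 3000 * 8                      # CBD = 3000 bytes
    for rp in (0.01, 0.008, 0.1):            # initial RP, RPmin, RPmax (seconds)
        print(f"RP = {rp:>5} s -> {CBD_BITS / rp / 1e6:.2f} Mbit/s")
    # -> 2.40 Mbit/s initially, 3.00 Mbit/s at RPmin, 0.24 Mbit/s at RPmax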
  From the graphs referring to throughput (Fig. 12), it can be seen that the maximum overall throughput (top) is greatly limited below the network's capacity, so it takes a much longer time to handle a burst than it does with the priority-based approach (Fig. 6), where the peak throughput could reach 8 Mbit/s.

It should also be pointed out that the throughput for real-time traffic only (Fig. 12, bottom) remains constant even in the presence of non real-time bursts, as real-time traffic is not smoothed and as the network is never congested.
  As far as channel access is concerned, in Fig. 13 we see that in stationary conditions real-time traffic has an access time of about 120 μs, which is comparable with the result obtained using the priority-based approach. This can be accounted for by the fact that the workload is similar in the two cases. If, however, we examine the delay during a burst, we see that in the priority-based approach the delay is lower (Fig. 7), even though the overall throughput is higher (8 Mbit/s versus 6 Mbit/s).
  This derives from the fact that in the priority-based approach non real-time traffic, which has a lower priority, does not disturb the real-time traffic. In the smoothing-based approach, on the other hand, the real-time and non real-time traffic compete for access to the bus according to the traditional CSMA/CD. Fig. 13 (bottom) shows that the delay values for non real-time traffic are very high during the burst due to the limits imposed by smoothing.
  The different behaviour of the two approaches is also revealed by a comparison between Fig. 8 and Fig. 14, which shows that the distributions of collisions obtained with adaptive smoothing for real-time and non real-time traffic are similar, thus demonstrating that in this approach the two types of traffic compete and collide more than in the priority-based one.
  Finally, Fig. 15 shows the distribution of delays for real-time traffic versus a threshold of 5 ms. Comparing such a distribution with that obtained using the priority-based approach (which is shown in Fig. 11a), we see that smoothing features higher delay values and that some frames even exceed the threshold.



VI. CONCLUSIONS
   This paper has discussed the possibility of using Ethernet to support the different communication requirements at the various levels of the control hierarchy in process control systems, focusing in particular on the use of Shared Ethernet at the Field level. Two channel access mechanisms to improve Ethernet's ability to support time-critical information exchanges have been introduced and evaluated via simulation. The results obtained have shown that the first mechanism, which is based on two different levels of (static) priority, offers more isolation between real-time and non real-time traffic than the second one, which is based on traffic smoothing. Thus, the priority-based approach appears more suitable for those environments in which such a property is desirable. Moreover, better performance, especially in terms of access delay, can be obtained from the priority-based approach through an appropriate choice of the persistence value for real-time traffic.
VII. ACKNOWLEDGMENTS
   The authors thank the anonymous reviewers for their useful comments.
VIII. REFERENCES
[1] ATM Forum, ATM Forum Traffic Management Specification, Version 4.0, April 1996.
[2] American National Standard for Information Systems, Fiber Distributed Data Interface (FDDI) - Token Ring Media Access Control (MAC), ANSI X3.139, 1987.
[3] J. St. Clair, 100Base-T Specification, Fast Ethernet Alliance, 1994.
[4] Gigabit Ethernet Alliance, http://www.gigabit-ethernet.org, 1999.
[5] L. Lo Bello, M. Lorefice, O. Mirabella, S. Oliveri, "Performance Analysis of Ethernet Networks in the Process Control", in Proc. of the 2000 IEEE International Symposium on Industrial Electronics (ISIE 2000), Puebla, Mexico, Dec. 2000.
[6] S. Kweon, K. G. Shin, Q. Zheng, "Statistical Real-Time Communication over Ethernet for Manufacturing Automation Systems", in Proc. of the Fifth IEEE Real-Time Technology and Applications Symposium, June 1999.
[7] M. Alves, E. Tovar, F. Vasques, "Ethernet Goes Real-Time: a Survey on Research and Technological Developments", Tech. Rep. HURRAY-TR-0001, Polytechnic Institute of Porto, Jan. 2000.
[8] IEEE 802.1Q, 1998, IEEE Standard for Local and Metropolitan Area Networks: Virtual Bridged Local Area Networks.
[9] (ISO/IEC) ANSI/IEEE Std 802.1D, 1998 Edition, Information Technology--Telecommunications and information exchange between systems--Local and metropolitan area networks--Common Specifications--Media Access Control (MAC) bridges.
[10] S. Rüping, E. Vonnahme, J. Jasperneite, "Analysis of Switched Ethernet Networks with Different Topologies Used in Automation Systems", in Proc. of the Fieldbus Conference (FeT'99), Magdeburg, Germany, pp. 351-358, Springer-Verlag, Sept. 1999.
[11] IEEE 802.3, 1998 Edition, Information Technology--Telecommunications and information exchange between systems--Local and metropolitan area networks--Specific requirements--Part 3: Carrier sense multiple access with collision detection (CSMA/CD) access method and physical layer specifications.
[12] S. Kweon, K. G. Shin, G. Workman, "Achieving Real-Time Communication over Ethernet with Adaptive Traffic Smoothing", in Proc. of the Sixth IEEE Real-Time Technology and Applications Symposium (RTAS 2000), pp. 90-100, Washington DC, USA, June 2000.
[13] R. L. Cruz, "A Calculus for Network Delay, Part I: Network Elements in Isolation", IEEE Trans. on Information Theory, 37(1):114-131, Jan. 1991.
