Plant-wide automation at Elgiloy Specialty Metals: production scheduling and monitoring

Finally, the plant-wide scheduling and data acquisition system was an integral part of the project scope. The objective was to supply a system that would allow material to be scheduled efficiently within the facility and historical data to be collected. This was to be accomplished with a minimum of complexity and expense while still providing the following functions (a code sketch of these functions appears after the list):

·  Receiving coil primary data information (PDI).

·  Scheduling work orders.

·  Collecting process data.

·  Generating shipping reports.

·  Monitoring work order progress.

·  Creating bills of lading.
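To make the list concrete, the following Python sketch shows one way these functions could hang together. It is an illustration only, not the deployed system: the class names, fields and helper functions (CoilPDI, WorkOrder, record_process_data, bill_of_lading) are all hypothetical.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class CoilPDI:
    # Primary data information for an incoming coil (fields are assumed, not from the source).
    coil_id: str
    alloy: str
    width_mm: float
    gauge_mm: float
    weight_kg: float

@dataclass
class WorkOrder:
    order_id: str
    coil: CoilPDI
    operations: list[str]                       # e.g. ["anneal", "roll", "slit"]
    completed: list[str] = field(default_factory=list)
    process_data: dict = field(default_factory=dict)

    def record_process_data(self, operation: str, readings: dict) -> None:
        # Collect process data for one operation and mark it complete.
        self.process_data[operation] = {"time": datetime.now(), **readings}
        self.completed.append(operation)

    def progress(self) -> float:
        # Work order monitoring: fraction of scheduled operations finished.
        return len(self.completed) / len(self.operations)

def bill_of_lading(orders: list[WorkOrder]) -> str:
    # Generate a minimal shipping document for fully processed orders.
    done = [o for o in orders if o.progress() == 1.0]
    lines = [f"{o.order_id}  {o.coil.coil_id}  {o.coil.weight_kg:.0f} kg" for o in done]
    return "BILL OF LADING\n" + "\n".join(lines)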

Plant network topology

From the earliest stages of the project, it was realized that automation sophistication and proper information flow were critical to the success of the operation. Efficient production scheduling and plant-wide data acquisition were therefore given primary consideration during initial plant design, and a properly layered plant-wide communication system with appropriate data transmission speeds and data availability was recognized as essential. Consequently, a layered control system composed of three levels was chosen, the levels being selected on the basis of both the speed required and the functions performed within them. The control levels are as follows (a compact summary in code appears after the list):

·  Level 1 - real-time process controllers.

·  Level 2 - supervisory process control with mathematical models.

·  Level 2.5 - plant-wide scheduling and data acquisition.
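As a compact illustration, the three levels can be summarized in a small lookup structure. The cycle times shown are assumed, industry-typical orders of magnitude; the article itself gives no figures.

# Cycle times are assumed, industry-typical values; the source gives none.
CONTROL_LEVELS = {
    1:   {"role": "real-time process controllers",              "cycle": "milliseconds"},
    2:   {"role": "supervisory control with mathematical models", "cycle": "seconds"},
    2.5: {"role": "plant-wide scheduling and data acquisition",  "cycle": "minutes"},
}

for level, info in sorted(CONTROL_LEVELS.items()):
    print(f"Level {level}: {info['role']} (cycle: {info['cycle']})")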

The plant network showing the various hardware platforms and control levels is illustrated in Fig. 2. The top plant-wide layer was arbitrarily designated level 2.5 to distinguish it from the higher enterprise resource planning applications conventionally associated with levels 3 and above in some models. The governing philosophy in developing this architecture was that the hardware and software layers should be as simple as possible: unnecessary intermediate data concentration was to be avoided, along with all the attendant maintenance and software administration tasks. Hence, the level 1 process controllers were connected directly to the level 2.5 supervisory system, as in the sketch below.
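As one way of picturing that direct connection, the sketch below polls level 1 controllers straight into a level 2.5 database with no intermediate concentrator. Everything here is assumed for illustration: read_controller is a stub standing in for whatever protocol the real controllers speak, and the tag names and table schema are invented.

import sqlite3
import time

def read_controller(host: str) -> dict:
    # Stub: poll one level 1 controller for its current process values.
    return {"speed_mpm": 120.0, "tension_n": 850.0}

def collect(controllers: list[str], db_path: str = "level25.db") -> None:
    # Write level 1 readings straight into the level 2.5 database;
    # each controller is polled directly, with no concentrator in between.
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, host TEXT, tag TEXT, value REAL)")
    for host in controllers:
        for tag, value in read_controller(host).items():
            con.execute("INSERT INTO readings VALUES (?, ?, ?, ?)",
                        (time.time(), host, tag, value))
    con.commit()
    con.close()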

The plant is constructed with fiber optic conduit running between each process area and the production office. All controllers or network nodes separated by more than 50 metres are connected with fiber optic cable; all other connections within one local area of the plant or office area use 10BaseT copper twisted-pair wiring (a one-line sketch of this cabling rule follows the list below). Connection types are noted in Fig. 2. Data transmission speed on all network connections is 10 Mbit/s, the rate of 10BaseT Ethernet. Hardware and operating systems chosen for the various servers and operator stations are as follows:

·  Manufacturing supervisory control (MSC) - ALR UNIX Server with Solaris® OS.

·  Work order scheduling system (WSS) - client PC with Windows NT.

·  Process control system (PCS) - ALR UNIX Server with Solaris OS.

·  Man machine interface (MMI) - ALR UNIX Server with Solaris OS.

·  Operator MMI (rolling mill) - HP-XP400 X terminals.

·  Operator workstations - client PCs with Windows NT.
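The 50-metre cabling rule mentioned above reduces to a one-line decision; the function below simply restates it in code (link_medium is an illustrative name, not part of any plant software).

def link_medium(distance_m: float) -> str:
    # Apply the plant's cabling rule: fiber beyond 50 m, 10BaseT copper otherwise.
    return "fiber optic" if distance_m > 50 else "10BaseT twisted pair"

# Example: a run from a process area to the production office vs. a local drop.
print(link_medium(220))   # -> fiber optic
print(link_medium(12))    # -> 10BaseT twisted pair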

Since the MSC system forms the heart of the level 2.5 layer, special care was taken to make it reliable and robust. Loss of data or a hardware failure on this system would render the plant scheduling capabilities inoperative; in addition, data on coils already in inventory would be lost and would need to be reentered manually. To avoid these potentially serious consequences, the mass storage devices on the MSC were configured redundantly. The application software and operating system are stored on a two-disk array that operates in a mirrored fashion (RAID 1 configuration): if one disk fails, the other automatically continues operating with an identical copy of the software. The database is stored on a three-disk array, which provides faster access, increased capacity and redundancy through distributed parity (RAID 5 configuration): if one disk fails, the remaining two continue operating without interruption, since the lost disk's contents can be reconstructed from the parity information. The parity idea is illustrated by the short sketch below.
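The parity principle behind the RAID 5 choice can be demonstrated in miniature. The toy sketch below treats three byte strings as stand-in "disks": any one of them can be rebuilt by XOR-ing the other two, which is why a single disk failure loses no data. This illustrates the principle only, not the MSC's actual disk controller behavior.

# Toy model of RAID 5 parity: three "disks", one holding the XOR of the other two.
data_a = bytes([0x12, 0x34, 0x56])
data_b = bytes([0xAB, 0xCD, 0xEF])
parity = bytes(a ^ b for a, b in zip(data_a, data_b))   # stored on the third disk

# Simulate losing disk B and reconstructing it from disk A and the parity disk.
rebuilt_b = bytes(a ^ p for a, p in zip(data_a, parity))
assert rebuilt_b == data_b
print("disk B rebuilt:", rebuilt_b.hex())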