
Intergraph Smart 3D Global Workshare (Oracle)

Language: English
Product: Intergraph Smart 3D
Subproduct: Global Workshare
Category: Administration & Configuration
Smart 3D Version: 13.1

A Global Workshare Configuration (GWC) enables you to share all the data within one model structure with remote sites. Designed for companies running models from multiple sites (EPCs or Owner/Operators, for example), or for multiple companies working on a single model, Global Workshare brings all changes together in a single, central database as if they had been created at the same site.

Data sharing in a workshare environment revolves around geographical hubs known as locations. Two types of locations are required to share model data among multiple sites: a Host location and a Satellite location. The Host location is a set of one or more database servers on a local area network (LAN) that contains the original set of databases associated with a site. A Satellite location is a set of one or more database servers on a LAN that contains the replicated databases associated with a site.

The Host location is created automatically during generation of the site database with the Database Wizard, so it is the first location created. The site database generation process also determines the name, name rule ID, and server of the Host location.

Satellite locations, on the other hand, are created manually in Project Management using the Database > New > Location command on the Host. You must have administrator privileges on the Site database to create a new location. After a location is created, it can be associated with permission groups and models as part of the workshare replication process.

In the Global Workshare solution, data sharing between locations is achieved through real-time replication of the entire model database to all Satellite locations. The catalog and catalog schema databases and the site and site schema databases are maintained on the Host server, while Satellite locations have read-only replicas of these databases. Reports databases are regenerated (not replicated) at each Satellite location.
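
The split of databases between one-way and two-way replication can be summarized with a small conceptual sketch. The Python below is purely illustrative: the Location class, the helper function, and the location names are hypothetical and do not represent a Smart 3D or Oracle API.

    from dataclasses import dataclass

    # Conceptual illustration only -- not a Smart 3D or Oracle API.
    ONE_WAY = ("Site", "Site schema", "Catalog", "Catalog schema")  # Host -> Satellites (read-only copies)
    TWO_WAY = ("Model",)                                            # Host <-> each Satellite
    LOCAL_ONLY = ("Reports",)                                       # regenerated at each location, never replicated

    @dataclass
    class Location:
        name: str
        is_host: bool = False

    def writable_databases(location: Location) -> list[str]:
        """Databases that can be modified at a given location."""
        if location.is_host:
            return list(ONE_WAY + TWO_WAY)   # reference data is edited only at the Host
        return list(TWO_WAY)                 # Satellites modify only the two-way replicated model

    host = Location("Houston", is_host=True)   # hypothetical location names
    satellite = Location("Mumbai")
    print(writable_databases(host))        # ['Site', 'Site schema', 'Catalog', 'Catalog schema', 'Model']
    print(writable_databases(satellite))   # ['Model']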

Multiple models (in the same Site and Site schema) can be configured for Global Workshare provided they use the same set of locations as the first GWC. However, not all locations have to be involved in all workshares.

The following diagram illustrates the Global Workshare Configuration:

[Diagram: Global Workshare Configuration]

The site, site schema, catalog, and catalog schema databases are replicated in a one-way fashion. The one-way replication copies data from the Host database server to each of the Satellite servers, but it does not copy data from each Satellite database back to the site, site schema, catalog, or catalog schema on the host server.

This means that all reference data modifications and permission group management must be performed at the Host location so that the changes propagate to the Satellite locations.

  • Each Satellite location should have its own SharedContent folder.

  • Inserted reference files that should be available at Satellite locations must be placed in the SharedContent folder and manually distributed to each Satellite location (a copy sketch follows this list). See Insert reference files in the Common Help.

  • In a local area network (LAN) setup where multiple servers are being used in the same LAN, it is recommended that catalog databases in the host/satellite workshare point to the same SharedContent folder.

  • In a wide area network (WAN) setup where multiple servers are spread across low bandwidth connections (ISDNs, Fractional T1s, and so on), it is recommended that catalog databases in the host/satellite workshare set point to a "close" SharedContent that exists on the same LAN as the database referencing it.
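
Because SharedContent is not replicated automatically, reference files have to be copied to each Satellite by some external means. The Python sketch below shows one minimal way to do that with the standard library; the UNC share paths are hypothetical placeholders, and any real procedure should follow your own file-distribution practices.

    import shutil
    from pathlib import Path

    # Hypothetical UNC paths -- substitute the SharedContent shares used at your sites.
    HOST_SHARED_CONTENT = Path(r"\\host-server\SharedContent")
    SATELLITE_SHARES = [
        Path(r"\\satellite1-server\SharedContent"),
        Path(r"\\satellite2-server\SharedContent"),
    ]

    def distribute_reference_files(source: Path, targets: list[Path]) -> None:
        """Copy the Host SharedContent tree into each Satellite share."""
        for target in targets:
            # dirs_exist_ok merges into an existing SharedContent folder (Python 3.8+).
            shutil.copytree(source, target, dirs_exist_ok=True)
            print(f"Copied {source} -> {target}")

    distribute_reference_files(HOST_SHARED_CONTENT, SATELLITE_SHARES)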

The model database is replicated in a two-way fashion with each Satellite. Data is replicated between the Host and each Satellite; all Satellite data is sent to the Host and then redistributed from the Host to the other Satellites. Because of this form of replication, any work performed in the model at any location results in the same data being pushed to all databases that participate in the GWC.
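
The hub-and-spoke flow described above can be illustrated conceptually. In the Python sketch below, the function and location names are hypothetical; it models only the direction of propagation, not the underlying Oracle replication mechanism.

    # Conceptual illustration of hub-and-spoke propagation -- not the Oracle mechanism itself.
    def propagate_model_change(origin: str, host: str, satellites: list[str]) -> list[str]:
        """Locations that end up holding a model change made at `origin`."""
        holders = [origin]
        if origin != host:
            holders.append(host)             # a Satellite change travels to the Host first
        holders.extend(s for s in satellites if s not in holders)  # the Host redistributes it
        return holders

    # Hypothetical location names: a change made in Mumbai reaches every location.
    print(propagate_model_change("Mumbai", "Houston", ["Mumbai", "Rotterdam", "Perth"]))
    # ['Mumbai', 'Houston', 'Rotterdam', 'Perth']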

  • After the GWC is established, use the backup tools in the Project Management task to create a backup set of the replicated databases from all locations. In the event of a corruption of data, you can recover the databases participating in the GWC and resume the replication by using any backup in the workshare.

  • Configuring Global Workshare within an integrated environment is a detailed and complicated process. Contact support at http://www.hexagonppm.com.

Network requirements

Global Workshare requires a fractional T1 (256-384 Kbps) connection for large projects. Replicating data between the Host and Satellite is a latency-bound task, so increasing the bandwidth does not increase the replication delivery speed. Increasing the bandwidth can be helpful at setup time, but not over the course of the project.
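
A rough back-of-the-envelope calculation shows why replication time is dominated by latency rather than bandwidth. All figures in the Python sketch below (payload size, round-trip count, round-trip time) are illustrative assumptions, not measured values.

    # Illustrative arithmetic only -- every number below is an assumption, not a measured value.
    payload_bits = 50 * 1024 * 8   # assume ~50 KB of changed data per replication cycle
    bandwidth_bps = 384_000        # fractional T1 at 384 Kbps
    round_trips = 20               # assume 20 acknowledged round trips per cycle
    rtt_s = 0.300                  # assume a 300 ms WAN round-trip time

    transfer_time = payload_bits / bandwidth_bps   # time spent moving bytes
    latency_time = round_trips * rtt_s             # time spent waiting on acknowledgements

    print(f"transfer: {transfer_time:.2f} s, latency wait: {latency_time:.2f} s")
    # With these assumptions the wait on round trips (~6 s) dwarfs the transfer time (~1 s),
    # so doubling the bandwidth would shave well under a second off each cycle.
    print(f"with double bandwidth: {payload_bits / (2 * bandwidth_bps) + latency_time:.2f} s")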

The network latency between a workstation client and the local database server needs to be as low as possible.
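
One simple way to spot-check client-to-server latency is to time TCP connections to the database listener. The Python sketch below uses only the standard library; the host name is a placeholder, and port 1521 (the default Oracle listener port) is assumed for your environment.

    import socket
    import time

    def average_connect_ms(host: str, port: int, samples: int = 5) -> float:
        """Average time to open a TCP connection to host:port, in milliseconds."""
        total = 0.0
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=5):
                pass
            total += time.perf_counter() - start
        return total / samples * 1000

    # "db-server" is a placeholder host name; 1521 is the default Oracle listener port.
    print(f"{average_connect_ms('db-server', 1521):.1f} ms average connect time")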

Virtualization

You can use virtual servers to implement a database server, but you must test and verify that the environment is suitable for a production project and that the configuration allows you to reach your milestones on time; performance or incompatibility problems could delay the project. In most cases, the major performance bottleneck is poor I/O, which can result from improper hard drive configuration or from overloading shared resources on the virtual server.
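
Before committing a virtualized database server to production, a crude sequential-write test on the volume that will hold the database files can reveal an obvious I/O problem early. The Python sketch below is a synthetic check only, not a substitute for a proper I/O load test; the file path is a placeholder.

    import os
    import time

    def sequential_write_mb_s(path: str, total_mb: int = 256, chunk_mb: int = 8) -> float:
        """Crude sequential-write test: MB/s achieved writing total_mb to path."""
        chunk = os.urandom(chunk_mb * 1024 * 1024)
        start = time.perf_counter()
        with open(path, "wb") as f:
            for _ in range(total_mb // chunk_mb):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())   # force the data to disk so the timing is honest
        elapsed = time.perf_counter() - start
        os.remove(path)
        return total_mb / elapsed

    # Placeholder path: point this at the volume that will hold the Oracle data files.
    test_file = r"D:\oracle_io_test.bin"
    print(f"{sequential_write_mb_s(test_file):.0f} MB/s sequential write")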