DDC is short for data distribution center. Data distribution centers are a type of centralized computing and data management system that serves a specific geographic area. They generally serve a narrow industry or purpose, such as oil and gas exploration or medical imaging processing, and some regions rely on more than one center to serve their entire population. For example, in some areas an oil and gas exploration center provides servers for various companies within the industry while also offering email services to the public at large through its POP3/IMAP servers.
These facilities are sited according to their purpose; a hospital will usually contain data centers both for ICT-based objectives (patient records) and for medical imaging processing (DICOM/CDX). A typical hospital might contain several data centers, one serving each department.
One example of a private data center is the Patient Records and Information Center (PRIC) operated by Brigham & Women’s Hospital in Boston, Massachusetts. PRIC is an electronic medical record (EMR) storage and management system. It was installed at Brigham & Women’s Hospital in 1975 after the passage of the Health Maintenance Organization Act required that hospitals establish such systems for patient records.
The implementation of PRIC fulfilled this requirement while also permitting access to patient information by providers outside of Brigham & Women’s Hospital. In 2010, a modernized version was implemented with a greater focus on security and interoperability between different institutions’ health care IT systems.
In medicine, distributed data centers are used to support highly distributed, data-intensive applications such as medical imaging. One example of a distributed data center in this field is the National Biomedical Imaging Archive (NBIA), a joint project between UCLA and Stanford University. The archive currently contains approximately 2 petabytes of patient images from medical institutions around the United States. Researchers may access these images through an application programming interface (API) that accepts parameters specifying which types of information to retrieve from the archive.
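As a sketch of how such parameterized retrieval might look, the snippet below builds a query URL from filter parameters. The base URL, endpoint, and parameter names here are hypothetical illustrations, not the archive’s actual API.

```python
from urllib.parse import urlencode

# Hypothetical base URL -- the real archive's endpoints may differ.
BASE_URL = "https://archive.example.org/api/v1/images"

def build_image_query(modality=None, body_part=None, institution=None):
    """Build a query URL for retrieving image metadata from the archive.

    Each keyword corresponds to a hypothetical filter parameter; only
    the parameters that are actually set appear in the query string.
    """
    params = {k: v for k, v in {
        "Modality": modality,
        "BodyPartExamined": body_part,
        "Institution": institution,
    }.items() if v is not None}
    return BASE_URL + "?" + urlencode(params) if params else BASE_URL

# Example: request metadata for all CT images of the chest.
url = build_image_query(modality="CT", body_part="CHEST")
```

A researcher’s client would then fetch that URL over HTTPS and page through the returned metadata before downloading the image files themselves.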
In oil and gas exploration, decentralized networks of servers serving geographic areas have developed because infrastructure is sparsely spread across large areas. In addition, the cost of building centralized facilities has been prohibitive because these regions demand high bandwidth (with some exceptions, like Svalbard) and satellite bandwidth is relatively expensive. While most oil and gas exploration data centers exist in industrialized countries, the relative lack of bandwidth limits data center solutions in resource-rich regions of developing countries.
The production of oil is generally monitored by an E&P (exploration and production) department within a petroleum organization such as Shell or BP. Its tasks include seismic acquisition, creating 3D models of rock structures from acoustic waves, well logging, identifying new formations from which to extract oil using injection wells (or sour-water disposal wells), drilling new wells, and so on.
Data produced during this process is typically stored on high-capacity hard drives located physically at the production site until they are shipped back to an office for processing. The data is shipped rather than transmitted because of limited bandwidth, high latency (i.e., response time), and because it does not require real-time monitoring or analysis.
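A quick back-of-envelope calculation shows why shipping drives wins at these volumes. The figures below (a 10 TB batch over a 10 Mbit/s uplink) are illustrative assumptions, not values from the text.

```python
def transfer_days(data_tb, link_mbps):
    """Days needed to transmit data_tb terabytes over a link_mbps link,
    assuming the link is fully dedicated to the transfer."""
    bits = data_tb * 8e12              # 1 TB = 8e12 bits (decimal units)
    seconds = bits / (link_mbps * 1e6)
    return seconds / 86400

# A 10 TB batch of seismic data over a 10 Mbit/s satellite uplink
# would monopolize the link for roughly three months:
days = transfer_days(10, 10)   # about 92.6 days
```

A courier shipment of the same drives arrives in days, which is why physical transport remains the norm for bulk exploration data.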
The oil industry’s Information Technology (IT) department takes care of the servers at its own site(s). It is responsible for temperature control inside the building, humidity, power supply availability and stability, physical security of all exterior doors and windows, periodic checks of fire-safety systems, and so on. For examples of oil and gas exploration centers in different countries, see this list: https://en.wikipedia.org/wiki/List_of_oil_and_gas_exploration_centers .
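As an illustration of the environmental monitoring described above, the sketch below checks sensor readings against fixed limits. The thresholds roughly follow the ASHRAE recommended envelope for data processing environments; real facilities use dedicated monitoring systems, so treat this only as a model of the logic involved.

```python
# Illustrative limits (low, high), loosely based on the ASHRAE
# recommended envelope; actual facility limits vary.
LIMITS = {
    "temperature_c": (18.0, 27.0),
    "humidity_pct": (20.0, 80.0),
}

def out_of_range(readings):
    """Return the names of sensors whose readings fall outside limits."""
    alarms = []
    for name, value in readings.items():
        low, high = LIMITS[name]
        if not (low <= value <= high):
            alarms.append(name)
    return alarms

# An overheating room trips the temperature alarm but not humidity:
out_of_range({"temperature_c": 31.0, "humidity_pct": 45.0})
```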
Each server room usually has racks that hold the servers and related equipment. These racks can be set to different heights, with one- or two-person access ladders mounted on each side. Stairs are rarely used because they take up more space and are less safe (in general, opening a door is required to use them).
Most countries require real estate (land) to be used for production purposes; however, exploration centers typically consist of oil platforms located in international waters, without land borders. These platforms require skilled labor and expensive materials to construct and maintain at great depths below sea level, which means that building an oil and gas exploration center provides few jobs and little revenue (relative to the amount of work it takes) compared with building a development or production site.
Oil and gas exploration companies typically use satellite Internet access for their data center interconnect; however, some lower-latency networks such as MPLS are available by other means (e.g., fiber landlines, wireless mesh networks). Satellite bandwidth has high latency and is expensive, but it is the only option in many parts of the world where real estate is used to produce oil rather than to provide data center infrastructure. The pioneering company in this field was OASYS Technology with its “DeepSeaNet” service in the early 2000s. The technology behind DeepSeaNet was later purchased and developed further into GlobeXplorer’s “Ocean Carrier Network”. Both services allow users to exchange unmodified IP packets over satellite through their own private networks.
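The high latency of satellite links follows directly from geometry: a geostationary satellite orbits about 35,786 km above the equator, so even at the speed of light a round trip takes a substantial fraction of a second. A minimal calculation of that lower bound:

```python
GEO_ALTITUDE_KM = 35_786        # geostationary orbit altitude
SPEED_OF_LIGHT_KM_S = 299_792

def geo_round_trip_ms():
    """Minimum round-trip time via a geostationary satellite:
    ground -> satellite -> ground for the request, then the same
    path for the reply. Ignores the ground stations' horizontal
    offset from the satellite, so this is a lower bound."""
    one_way_km = 2 * GEO_ALTITUDE_KM    # up and back down
    rtt_km = 2 * one_way_km             # request plus reply
    return rtt_km / SPEED_OF_LIGHT_KM_S * 1000

geo_round_trip_ms()   # roughly 477 ms, before any queuing or processing
```

That near half-second floor is why terrestrial options such as MPLS over fiber, where available, are preferred for latency-sensitive traffic.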