If you have read the previous articles about the Project and the Program, here is a quick recap.
Portfolio: Portfolio is a general term used to define the complete wing of any product or operating model. A portfolio is a combination of multiple programs and projects, individual or interlinked, with a defined schedule, quality, and budget. It could be a whole product or a customer-facing service.
Program: A set of projects, interlinked with or independent of each other, which need to be delivered on time, on budget, and with a defined quality. Multiple such sets of projects are called a program.
Project: You might have read "what is a project"; let's keep it simple again. A project is a set of deliverables that need to be delivered on time, on budget, and with high quality, following all necessary compliances.
Protection from power loss is a common characteristic of datacenter facilities. Such protection comes at a significant first cost, and it also carries a continuous power usage cost that can be reduced through careful design and equipment selection.
Objective 1: Design UPS system for efficiency. The electrical design impacts the load on and the ultimate efficiency achieved by the UPS. (See chart illustrating typical power flows in data centers.) Strategies: Maximize Unit Loading. When using battery based UPSs, design the system to maximize the load factor on operating UPSs. Use of multiple smaller units can provide the same level of redundancy while still maintaining higher load factors, where UPS systems operate most efficiently.
Metric: Average UPS loading. Battery based UPSs should be loaded to 50% or greater in actual operation.
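As a quick illustration of the unit-loading idea, here is a minimal sketch (all kW figures are made up for the example) comparing the average load factor of two N+1 designs serving the same critical load:

```python
def avg_load_factor(it_load_kw, unit_kw, n_units):
    """Average load on each operating UPS module, as a fraction of its
    rating, assuming the load is shared equally across all modules
    (the redundant module runs in parallel in normal operation)."""
    return it_load_kw / (n_units * unit_kw)

# Design A: one 800 kW module needed, plus one redundant (N+1)
a = avg_load_factor(800, 800, 2)   # 0.50, just meets the 50% target

# Design B: four 250 kW modules needed, plus one redundant (N+1)
b = avg_load_factor(800, 250, 5)   # 0.64, higher load factor, same redundancy

print(f"2 x 800 kW: {a:.0%} average loading")
print(f"5 x 250 kW: {b:.0%} average loading")
```

Both designs survive the loss of one module, but the smaller-module design keeps each operating UPS in its more efficient loading range.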
Objective 2: Select most efficient UPS possible. UPS efficiencies vary, not only across different types of topologies, but also within the same type of system between different models. Strategies: Specify Minimum Unit Efficiency at Expected Load Points. There are a wide variety of UPSs offered by a number of manufacturers at a wide range of efficiencies. Include minimum efficiencies at a number of typical load points when specifying UPSs. Compare offerings from a number of vendors to determine the best efficiency option for a given UPS topology and feature set.
Evaluate New UPS Technologies for Efficiency. New UPS technologies that offer the potential for higher efficiencies and lower maintenance costs are in the process of being commercialized. Consider the use of systems such as flywheel or fuel cell UPSs when searching for efficient UPS options.
Do Not Overspecify Power Conditioning Requirements. In general, the greater the level of power conditioning used, the lower the system efficiency. Consider line-interactive UPSs for standard server equipment that does not require the higher level of power conditioning offered by double-conversion units. Some manufacturers offer UPS systems that can operate in both single and double conversion modes, allowing for flexibility to deal with unanticipated future equipment or power conditions. Do not use additional power conditioning in the PDU if it is not required.
Metric: UPS Efficiency. UPS efficiency should exceed 90% at full load and 86% at half load. See chart for the range of UPSs found on the market today.
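A simple way to hold vendors to this metric is a spec-check like the sketch below; the load points and efficiency figures are illustrative, not from any particular product:

```python
def meets_spec(eff_at_load):
    """eff_at_load maps load fraction -> efficiency (both 0-1).
    Returns True only if the quoted curve meets the minimums at the
    specified load points (90% at full load, 86% at half load)."""
    minimums = {0.5: 0.86, 1.0: 0.90}
    return all(eff_at_load.get(point, 0.0) >= floor
               for point, floor in minimums.items())

print(meets_spec({0.25: 0.85, 0.5: 0.90, 1.0: 0.93}))  # True
print(meets_spec({0.5: 0.84, 1.0: 0.91}))              # False: fails at half load
```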
Objective 3: Use self-generation for large installations. For large facilities, a self-generation plant designed to capture and utilize waste heat can achieve very high total efficiency, as well as offering greater control over power reliability and quality. Strategies: Eliminate Standby Generator. Standby generators are typically specified with jacket and oil warmers that use electricity to maintain the system in standby at all times, so even if (especially if) they are never used they are a constant energy waste. Eliminating the standby generator requires proper engineering of the system, but it will reduce energy use as well as save first costs and maintenance costs. Standby generator heaters (many operating hours) use more electricity than the generator will ever produce (few operating hours).
Recover Waste Heat for Local Heating Uses. If the datacenter is physically near a commercial or industrial heat load, use the waste heat directly to serve the load. Use of the waste heat directly for heating is the most efficient way to optimize total self-generation system fuel efficiency.
Recover Waste Heat for Datacenter Cooling Use. Waste heat can be used to drive absorption or adsorption chillers, maximizing the utilization of the fuel used to power the generator system.
Eliminate UPS Systems. With careful design, UPS systems can be replaced by self-generation equipment. Combined with recovering waste heat, such a system combines high efficiency with no UPS efficiency losses.
Metric: Utilized Electricity and Heat / Input Fuel Btus. Include electricity use and legitimate uses of waste heat.
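A rough illustration of this metric, with made-up plant numbers (the generator size, fuel input, and recovered heat are all assumptions):

```python
KW_TO_BTUH = 3412.0  # 1 kW = 3,412 Btu/h

fuel_in_btuh = 10.9e6                 # fuel input to the generator (assumed)
elec_btuh = 1000 * KW_TO_BTUH         # 1,000 kW of electricity produced
recovered_heat_btuh = 4.0e6           # waste heat put to legitimate use

total_eff = (elec_btuh + recovered_heat_btuh) / fuel_in_btuh
print(f"Utilized electricity and heat / input fuel = {total_eff:.0%}")
```

Electricity alone would score about 31% here; counting the recovered heat lifts the total well past what a remote utility plant can deliver to the site.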
The IT equipment is the reason for the facility. Increasingly, there are reasonable opportunities to increase the efficiency of IT equipment, reducing the need for mechanical infrastructure and ongoing energy use directly at the load level through the selection of IT equipment.
Objective 1: Specify efficient server equipment. Reducing the energy use of the IT equipment itself is possible and yields multiple cost savings: lower total infrastructure requirements, lower cooling loads, lower electricity consumption, a smaller UPS, etc. Strategies: Specify High Efficiency Power Supplies. Server power supplies can be made significantly more efficient than those currently offered. Specify power supplies that have a minimum efficiency equal to or greater than the minimum recommended power supply efficiency guidelines put forth by the industry group Server System Infrastructure (SSI) Initiative.
Consider Equipment Power Consumption in Specifications. Develop internal procedures to encourage the acquisition of the most energy efficient equipment that will meet the application requirements. Lower power consumption chips, idle mode setbacks and other features can vary the power efficiency of equivalent equipment. Reference: new chips from Sun.
Metric: Power Supply Efficiency. Power supplies in IT equipment should be better than 80% efficient from 20% through full load.
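This metric is easy to check against a vendor's published efficiency curve; the curve values below are invented for illustration:

```python
def psu_meets_80_plus(measured):
    """measured maps load fraction -> efficiency. True if every measured
    point from 20% load through full load is at least 80% efficient."""
    return all(eff >= 0.80 for load, eff in measured.items() if load >= 0.20)

# Hypothetical vendor curve; the 10% point is below the 20% floor and
# therefore does not count against the spec.
curve = {0.10: 0.65, 0.20: 0.82, 0.50: 0.88, 1.00: 0.85}
print(psu_meets_80_plus(curve))  # True

bad_curve = {0.20: 0.79, 0.50: 0.88, 1.00: 0.85}
print(psu_meets_80_plus(bad_curve))  # False: fails at 20% load
```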
Objective 2: Use cooled equipment racks. Incorporating waste heat rejection directly into the IT equipment racks allows heat to be collected and rejected very efficiently. Strategies: Use Equipment Racks with Integral Coil. Transferring IT equipment waste heat to a cooling water loop directly at the rack allows for the complete elimination of heat recirculation. The heat is captured, before mixing with the room air, at a higher air temperature. This allows a correspondingly higher cooling water temperature to be used in the coil, allowing significant plant efficiency opportunities. Since IT equipment heat is removed prior to it being mixed into the room air, a much smaller room conditioning system can be used: a standard office system with the addition of humidity control and enough capacity to cool racks that are open for servicing.
Consider Direct Liquid Cooling. Direct liquid cooling options range from the mature technology of providing water passages through chip heatsinks to new approaches under development, ranging from using heatpipes to conduct heat from a chip to a liquid reservoir to spraying whole boards directly with an inert dielectric cooling fluid. Direct liquid cooling is far more efficient than using air cooling, which requires about 3,500 times the volume to remove the same quantity of heat. Where direct liquid cooling options are offered, they should be carefully considered for their significant fan power savings and synergy with medium temperature chilled water loops and waterside economizer free cooling. More info [download].
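The roughly 3,500x figure can be sanity-checked from the volumetric heat capacities (density times specific heat) of water and air; the property values below are standard room-condition approximations:

```python
# Volumetric heat capacity = density * specific heat
rho_water, cp_water = 1000.0, 4186.0   # kg/m^3, J/(kg*K)
rho_air, cp_air = 1.2, 1005.0          # kg/m^3, J/(kg*K), near sea level

ratio = (rho_water * cp_water) / (rho_air * cp_air)
print(f"Water carries the same heat in roughly {ratio:,.0f}x less volume")
```

The computed ratio lands close to 3,500, consistent with the claim in the text.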
Metric: Watts cooling device / rack Watts cooled. The combined wattage of all fans (including small, in-rack fans) and pumps utilized by the liquid cooled rack to transfer heat to the chilled water loop, divided by the actual watts of rack load cooled, should be less than 0.0202 (71 watts per ton of delivered cooling).
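A worked example of this metric with hypothetical fan, pump, and rack wattages (1 ton of refrigeration is taken as 3,517 W):

```python
fan_watts = 180.0       # sum of all in-rack fans (assumed)
pump_watts = 60.0       # loop pump power attributed to this rack (assumed)
rack_watts = 12_000.0   # IT load actually cooled (assumed)

WATTS_PER_TON = 3517.0  # 1 ton of refrigeration

metric = (fan_watts + pump_watts) / rack_watts
per_ton = metric * WATTS_PER_TON
print(f"{metric:.4f} W of cooling device per W cooled ({per_ton:.0f} W/ton)")
```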
Humidification specifications and systems have often been found to be excessive and/or wasteful in datacenter facilities. A careful, site specific design approach to these energy-intensive systems is usually needed to avoid energy waste.
Objective 1: Design system to actual equipment requirements. Datacenter benchmarking has found a wide range of operating temperature and humidity setpoints in use at datacenters. The choice of room setpoints should be based upon actual equipment requirements and efficiency opportunities. Strategies: Use Widest Suitable Humidity Control Band. The tightest humidity control band recommended by ASHRAE is 40-55%, and larger humidity control bands are successfully used by datacenters. Overspecifying the required humidity control significantly increases first cost and long-term operating and maintenance costs.
> Specify Humidity Sensor Calibration Schedule. Humidity sensors tend to drift and require more frequent calibration than temperature sensors. An incorrect humidity sensor is less likely to be noticed than an erroneous temperature sensor, and could lead to extended out-of-specification datacenter humidity levels and excessive humidification energy costs. Regular sensor calibration (or replacement) is required to maintain accurate humidity control.
> Provide Appropriate Sensor Redundancy. Humidity control is only as good as the humidity sensors used. To maintain very tight humidity control, multiple humidity sensors should be used. A minimum of two humidity sensors increases the opportunity to catch a drifting or failed sensor before it affects control.
>Control Humidity with Dedicated Outdoor Air Unit. Typically, ventilation air represents the majority of humidity load in a datacenter. Controlling humidity with a single ventilation air handler (or a pair of redundant units with a common control loop) is a common method of eliminating simultaneous humidification and dehumidification.
Metric: kWh or therms per pound of humidification.
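For an isothermal (electric steam) humidifier, a floor for this metric is the latent heat of vaporization per pound of moisture added, roughly 970 Btu/lb; the sketch below converts that to kWh per pound. Real systems will use more than this floor:

```python
LATENT_HEAT_BTU_PER_LB = 970.0  # approx. heat to vaporize 1 lb of water
BTU_PER_KWH = 3412.0

floor_kwh_per_lb = LATENT_HEAT_BTU_PER_LB / BTU_PER_KWH
print(f"Isothermal humidification floor: ~{floor_kwh_per_lb:.2f} kWh per lb")
```

Adiabatic technologies (discussed under Objective 3) sidestep most of this energy by drawing the heat of vaporization from the air stream instead of from electricity or gas.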
Objective 2: Eliminate over humidification and/or dehumidification. Humidification is very energy intensive, requiring the addition or removal of large amounts of heat and often degrading the efficiency of the entire cooling plant when in operation. Strategies: Ensure Proper Economizer Lockout. During periods of low absolute humidity (often measured as dewpoint), minimize the quantity of air brought into the Datacenter to avoid high humidification loading. Alternatively, humidify the air using heat recovered from the return air stream.
>Maintain Coil Temperature Above 55F. Coil temperatures below 55F can lead to localized areas of uncontrolled dehumidification on the coil, which can cause unnecessary energy use and decrease humidity control stability. A chilled water system with a chilled water setpoint of 50F or higher can greatly reduce uncontrolled dehumidification and the resultant humidification load.
>Centralize Humidity Control. Each datacenter space should have a single, centralized humidity control system. Multiple systems frequently end up 'fighting,' that is one unit will be humidifying a room while another unit is simultaneously dehumidifying the same room. A central humidity control system can completely eliminate this inefficient fighting and provide better control by eliminating or allowing rapid diagnosis of the causes of fighting (incorrect setpoints, failed sensors, etc.).
Objective 3: Use efficient humidification technology. When significant quantities of humidification are expected, investment in a high efficiency system is often justified by expected energy costs. Strategies: Use Waste Return Air Heat to Humidify. During periods of low outdoor air temperature, the warm return airstream can be humidified using an efficient adiabatic technology and mixed with cold, dry outdoor air to provide very low energy cost cool air at an appropriate humidity.
>Use Adiabatic Humidifiers for Humidity and Evaporative Cooling. Adiabatic humidifiers absorb heat from the air stream, providing both humidity and cooling with minimal energy input.
>Use Lower Power Humidification Technology. There are several options for lower power, non-isothermal humidification, including air or water pressure based 'fog' systems, air washers, and ultrasonic systems.
Goal: The air handler fan is typically the second largest energy use in the mechanical system, and can even exceed the energy use of the cooling plant in some cases. Optimizing the air handler system for datacenter use, as opposed to relying on traditional air handler design rules developed over years of office system design, is essential to achieve an efficient and cost effective system.
Objective 1: Minimize fan power requirements. Fan energy is a major operation cost that can often be reduced through design and control. Strategies: Low Pressure Drop System Design. Use low pressure drop air handlers and ductwork. A face velocity of 250-300 fpm is appropriate for datacenters, which operate 8760 hours a year with continuous loading. Underfloor plenums should be sized to provide space for low pressure drop airflow after accounting for an appropriate level of blockage from utilities, conduits, and other underfloor infrastructure.
>Use Redundant Air Handler Capacity in Normal Operation. With the use of Variable Speed Drives and chilled water based air handlers, it is most efficient to maximize the number of air handlers operating in parallel at any given time. Power usage drops approximately with the square of the velocity, so operating two units at 50% capacity uses less total energy than a single unit at full capacity.
>Metric: Fan power Watts / CFM under typical conditions. This metric accounts for both the pressure drop of the entire system and the efficiency of the fans used. Typical condition should include operation of the redundant unit(s) where applicable. A baseline system operates at about 1.08 W/CFM, while a good system can achieve 0.30 W/CFM (corresponding to a 1.5 in. w.g. total system pressure drop).
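The square-law argument from the strategy above can be sketched as follows; this is an idealized model (identical units, equal flow split), not a prediction for any real system:

```python
def relative_fan_power(n_units):
    """Total fan power for n identical parallel units sharing a fixed
    total airflow, relative to one unit carrying the full flow.
    Each unit moves 1/n of the flow; with pressure drop ~ (velocity)^2,
    per-unit power ~ (1/n)^3, so total power ~ n * (1/n)^3 = 1/n^2."""
    return 1.0 / n_units**2

print(relative_fan_power(1))  # 1.0
print(relative_fan_power(2))  # 0.25: two units at 50% beat one at 100%
```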
>Metric: Economizer Presence and Lockout Band. An airside economizer is a significant energy saver. Assuming proper design of the humidification control and return air exhaust, a wider band between lockouts indicates greater savings potential.
Objective 2: Use an optimized airside economizer. The typical datacenter load profile is ideally suited for cooling with outdoor air during much of the year, particularly at night, in most climates. Strategies: Implement an Airside Economizer. Datacenters in most climates can significantly benefit from an airside economizer. Datacenters can often be served by using outdoor air during cooler weather and particularly at night when, unlike most office buildings, datacenters still require significant cooling.
>Design For Medium Temperature Air. Size ducting and layout the datacenter room(s) to allow for cooling with the use of air at 60F or higher temperatures. Set the datacenter temperature setpoint near the top of the ASHRAE TC 9.9 Thermal Guidelines for Data Processing Environments recommended temperature range for the applicable datacenter class (the most stringent datacenter class has an allowable high temperature of 95F and recommended of 77F).
> Control to Avoid Unnecessary Humidity Loads. Humidification and dehumidification are very energy intensive. Outdoor air moisture content should be continuously monitored, typically as a drybulb-temperature independent measurement such as dewpoint or absolute humidity. Control to ensure that economization is not resulting in humidity loads in excess of the cooling load reduction it provides. Alternatively, utilize an efficient humidification technology, such as adiabatic humidification driven by waste heat from the return air stream or other source.
Objective 3: Use large centralized air handlers. Centralized air handlers offer efficiency improvements from larger equipment while accommodating a number of controls and configuration efficiency opportunities. Strategies: Use Load Diversity to Minimize Fan Power. A central air handler system can save energy by running the entire system at a lower pressure drop when areas of the datacenter are loaded below the design assumptions. A distributed system can only realize fan savings on the one or two units serving the lightly loaded area, assuming the smaller units are equipped with Variable Speed Fans.
>Optimize Air Handler for Fan Efficiency and Low Pressure Drop. Centralized air handlers are commonly located outside of the datacenter space, in mechanical rooms or on the roof. Since expensive datacenter floor area is not consumed by the units' footprint, it is economical to make them moderately larger to reduce the velocity and cut the fan power requirements. Typically, they also allow cleaner entry into and exit from the fan, reducing system effects and allowing the fan to operate more efficiently.
>Configure Redundancy to Reduce Fan Power Use in Normal Operation. When multiple small distributed units are used, redundancy must be equally distributed. Achieving N+1 redundancy can require the addition of a large number of extra units, or the oversizing of all units. A central air handler system can achieve N+1 redundancy with the addition of a single unit. The redundant capacity can be operated at all times to provide a lower air handler velocity and an overall fan power reduction, since fan power drops with the square of the velocity.
>Use Premium Efficiency Motors and Fans. Larger motor sizes are more efficient. Assuming NEMA premium efficiency motors, a 30 HP fan will be 2% more efficient than a smaller 10 HP fan. Larger, lower rpm fans can also be selected for a higher efficiency than most smaller fans.
> Control Volume by Variable Speed Drive on Fans Based on Space Temperature. The central air handlers should use variable fan speed control to minimize the volume of air supplied to the space. The fan speed should be varied in series with the supply air temperature, reducing fan speed to the minimum possible before increasing the supply air temperature above a reasonable setpoint. Typically, supply air of 60F is appropriate to provide the sensible cooling required by datacenters.
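A hedged sketch of the sequencing described above: slow the fans first, and only raise supply air temperature once the fans are at minimum speed. The minimum speed and temperature band are illustrative assumptions, not values from the text:

```python
MIN_SPEED, MAX_SPEED = 0.3, 1.0      # fan speed fraction (assumed minimum)
SAT_BASE_F, SAT_MAX_F = 60.0, 65.0   # supply air temp band (assumed reset range)

def lerp(a, b, t):
    """Linear interpolation from a (t=0) to b (t=1)."""
    return a * (1.0 - t) + b * t

def sequence(cooling_demand):
    """cooling_demand in [0, 1]; returns (fan_speed, supply_air_temp_F).
    Upper half of demand: fan modulates, SAT stays at base.
    Lower half: fan pinned at minimum, SAT resets upward as demand falls."""
    if cooling_demand >= 0.5:
        frac = (cooling_demand - 0.5) / 0.5
        return lerp(MIN_SPEED, MAX_SPEED, frac), SAT_BASE_F
    frac = cooling_demand / 0.5
    return MIN_SPEED, lerp(SAT_MAX_F, SAT_BASE_F, frac)

print(sequence(1.0))   # (1.0, 60.0)  full demand: full speed, base SAT
print(sequence(0.5))   # (0.3, 60.0)  fan at minimum, SAT still at base
print(sequence(0.0))   # (0.3, 65.0)  fan at minimum, SAT fully reset up
```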
1.Mechanical: Airflow Management — The efficiency and effectiveness of a datacenter conditioning system is heavily influenced by the path, temperature and quantity of cooling air delivered to the IT equipment and waste hot air removed from the equipment.
Objective 1: Eliminate mixing and recirculation of hot equipment exhaust air. Efficiency is improved by removing waste hot air at the highest possible temperature. Strategies: There are three strategies for obtaining this goal: Hot Aisle/Cold Aisle: Arrange the IT equipment so that all heat is exhausted into hot aisles, and all air intakes draw from cold aisles. Cool air is supplied only into the cold aisles, with return air being drawn directly from the hot aisles. [Chart].
>Rigid Enclosures: Build rigid enclosures to fully separate the heat rejected from the rear of IT equipment from the cool air intakes on the front.
>Flexible Strip Curtains: Arrange IT equipment racks to form hot aisles and cold aisles. Use flexible strip curtains to improve the separation by blocking open space above the racks.
>Blank Unused Rack Positions. Standard IT equipment racks exhaust hot air out the back and draw cooling air in the front. Openings that form holes through the rack should be blocked in some manner to prevent hot air from being pulled forward and recirculated back into the IT equipment.
>Design for IT Airflow Configuration. Some IT equipment does not have a front-to-back cooling airflow configuration. Configure racks to ensure that equipment with side-to-side, top-discharge, or other airflow configurations reject heat away from other equipment air intakes.
>Select Racks with Good Internal Airflow. Select equipment racks that do not have an internal structure configuration that would obstruct smooth cooling airflow through the installed IT equipment.
> Metric: Return air temperature. Higher is better. Higher return temperatures allow for greater savings from economization and lower fan volume requirements; the higher the Delta T between supply and return, the greater the reduction in fan power possible.
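Why a higher return (and hence a higher supply-to-return Delta T) cuts fan volume: using the standard-air sensible heat relation Btu/h = 1.08 x CFM x dT, the required airflow for a fixed load halves when Delta T doubles. The 100 kW load is illustrative:

```python
def required_cfm(load_kw, delta_t_f):
    """Airflow needed to remove a sensible load at a given supply-to-
    return temperature difference, using Btu/h = 1.08 * CFM * dT."""
    return load_kw * 3412.0 / (1.08 * delta_t_f)

load = 100.0  # kW of IT load, illustrative
print(f"dT 10F: {required_cfm(load, 10):,.0f} CFM")
print(f"dT 20F: {required_cfm(load, 20):,.0f} CFM")  # half the airflow
```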
Objective 2: Maximize return air temperature by supplying air directly to the loads. Cooling air should be supplied directly to the IT equipment air intake location; unlike with office spaces, the average room condition is not the critical parameter. Strategies: There are two strategies for obtaining this goal: Use Appropriate Diffusers: Standard office style diffusers, designed to create a fully mixed environment and avoid creating drafts, are inappropriate for datacenters. Diffusers should be selected that deliver air directly to the IT equipment, without regard for drafts or throw concerns that dominate the design of most office-based diffusers.
>Position Supply and Returns to Minimize Mixing and Short Circuiting. Diffusers should be located to deliver air directly to the IT equipment. At a minimum, diffusers should not be placed such that they direct air at rack or equipment heat exhausts, but rather direct air only towards where IT equipment draws in cooling air. Supplies and floor tiles should be located only where there is load, to prevent short circuiting of cooling air directly to the returns; in particular, do not place perforated floor supply tiles near computer room air conditioning units, where the supply air would short circuit directly back into the unit's return.
>Minimize Air Leaks in Raised Floor Systems. In systems that utilize a raised floor as a supply plenum, minimize air leaks through cable accesses in hot aisles, where supply air is essentially wasted. Also implement through policy or design control of supply tile placement to ensure that supply tiles are not placed in areas without appropriate load and/or near the return of the cooling system, where cooling air would short-circuit and, again, be wasted.
>Optimize Location of Computer Room Air Conditioners. In large datacenters, a Computational Fluid Dynamics model may be practical to determine the best location for cooling units. Simple steps should also be considered, such as minimizing the distance between Computer Room Air Conditioner units and the largest loads to reduce the opportunities for leakage from underfloor supply plenums or overhead supply ducting.
>Provide Adequately Sized Return Plenum or Ceiling Height. Overhead return plenums need to be sized to allow for the large quantities of airflow required. Common obstructions such as piping, cable trays, or electrical conduits need to be accounted for when calculating the plenum space required. Blockages can cause high pressure drops and uneven flow. Often the uneven flow cannot be rectified by balancing, and uneven return results in short circuiting of cooling air and cold spots near the return fan.
>Provide Adequately Sized Supply. Underfloor supply plenums need to be sized to allow for the large quantities of airflow required. Common obstructions such as piping, cable trays, or electrical conduits need to be accounted for when calculating the plenum space required. Blockages can cause high pressure drops and uneven flow, resulting in cold spots in areas where cooling air is short circuiting to the return path.
>Use an Appropriate Pressure in Underfloor Supply Plenums. Too high a pressure will result in both higher fan costs and greater leakage and short circuiting of cooling air. Too low a pressure can result in hot spots at the areas most distant from the cooling supply air point and result in poor efficiency 'fixes' such as a lowering of the supply air temperature or overcooling the full space just to address the hot spots.
What is a data center?
In simple terms, a data center is the place where your data lives, irrespective of the hardware: servers, storage, etc.
Datacenter design best practices fall into the following categories:
1.Mechanical: Airflow Management
2.Mechanical: Air Handler Systems
3.Mechanical: Humidification
4.Mechanical: Plant Optimization
5.IT Equipment: Selection
6.Electrical Infrastructure
7.Lighting
8.Commissioning and Retro-commissioning
Topic Continues ..!