2017 Q3: Data Center Beyond Reality - Programming

  • 29 Mar 2017 1:19 PM
    Reply # 4701125 on 4859678

    2017 Q3/Q4

    • Theme/Program: Data Center Beyond Reality
    • Dates/Times: TBD | 5-8pm
    • Preferred Locations:
      • Downtown LA
      • Pasadena
      • Burbank
    • Venues:
      • Data Center (deploying AI / machine learning)
        • ISP: Facebook, Google, Microsoft, Baidu
        • Colo: Equinix, Switch
        • Edge Facilities: 
      • Product Manufacturer

      • Production Studio

      • Learning Center
        • Stanford
        • UC?
    • Food/Spirits:
      • Catered dinner
      • Refreshments (beer/wine & soda/juices)
    • Speakers/Moderators:
      • Opening: Event Chair (TBD)

      • Technology Expert
        • Google: Gao
        • NVIDIA: Jensen Huang (CEO) or Ian Buck (VP)
      • Product Manufacturer and/or DC Operator employing AI
        • Platforms:
          • VR: Google Cardboard
          • AR: Magic Leap, Facebook's Oculus, Snapchat's TBD
          • AI: Vigilent (Schneider), Dynamic Smart Cooling (HP), DeepMind (Google), 6SigmaDCX (CFD modeling), nuPSYS
        • Headgear (AR/VR): Google Glass, Microsoft HoloLens, Snapchat (TBD by Epiphany Eyewear)
      • Change Agent and/or deployment engineer
        • 6Sigma Modelers
        • DC Operators
        • Production Studio
      • Closing:
        • Chapter President, Doug Wimberly
          • State of the Chapter
          • Upcoming Events
          • Engagement Opportunities
    • A/V
      • Still Photography
        • Guillermo
      • Video

      • Streaming Media
        • Periscope?
        • Twitter?
        • YouTube?
    • Possible Event Sponsors:
      • Google, Facebook, Microsoft, Baidu
      • Equinix, Switch
      • Schneider Electric, HP, nuPSYS
    • Notes:

    • Announcements:
      • Save the Date

    Last modified: 13 Dec 2017 1:48 PM | Peter Gmiter
    Moved reply from State of Programming: 22 Aug 2017 8:15 PM
  • 19 May 2017 6:32 AM
    Reply # 4841943 on 4859678

    2017 Q3 Proposed:

    Last modified: 22 Jun 2017 11:28 PM | Peter Gmiter
    Moved reply from State of Programming: 22 Aug 2017 8:16 PM
  • 19 May 2017 9:15 PM
    Reply # 4843059 on 4859678

    "The Data Center Beyond Reality" ~ Program Research:

    • From: Green Data Center News, "Augmented Reality beats Virtual Reality"
      • "VR can provide a much more exotic and richer immersive environment, but it takes time and energy and money for designers and programmers. Gamers will likely remain the primary audience for VR"
      • "Data centers looking to capitalize on the AR craze will need to have high-speed network links at the very least. Geographic diversity and proximity to target audiences may also be necessary, depending on the app and overall network speed"
        • Suggestive of EDGE data centers
    • From: Blog FNT Software, "Virtual Reality and Augmented Reality in the Data Center"
      • Definition: Virtual Reality (VR): "used to refer to the presentation and simultaneous perception of reality and its physical properties in an interactive virtual environment that is computer-generated in real time...all elements are computer-generated"
      • Definition: Augmented Reality (AR): "computer-assisted enhancements to how we perceive reality. Often, this involves the visual presentation of information – in other words, supplementing images or videos with computer-generated additional information or virtual objects using fade-in and overlay functions"
      • VR technology available beginning 2016; inexpensive via Google Cardboard
      • Forecast: "market researchers are forecasting growth of more than 100% – from roughly USD 2.5 billion to more than USD 5 billion worldwide – between the years 2015 and 2018...growth totaling more than USD 25 billion for hardware and software by the year 2020"
      • Players:
      • "In this regard, these technologies could be tremendously beneficial in the future, particularly in distributed data center planning and common data center processes"
      • Potential VR technology applications [in data centers]:
        • Visualization and remote management of distributed data centers
        • Spatial data center tours with 3D, real-time imaging to gain a virtual impression of the local circumstances
        • 3D, spatial visual device planning
        • Problems can be visualized and monitored virtually with ease, with warnings displayed directly on the device causing them, along with troubleshooting information
        • Consolidation of data that is normally only available in various sources or tools and visualization of this information in virtual space
        • Live data integration in the virtual world, enabling the display of a server’s status or the currently measured temperature and simulation of the climate situation
        • Within the context of installation planning, unoccupied rack units, overloaded racks and mains and electricity ports that are still free can be graphically identified by performing a VR analysis and the corresponding installation space can be selected
        • Signal paths for power and data networks can be monitored and virtually presented in the virtual environment – even for remote data centers or beyond data centers
      • Potential AR technology applications [in data centers]:
        • Location-dependent navigational support within a data center for a specific device – to identify an error-prone device or to install new devices, for example
        • Displaying notifications or alarm and warning messages on a device
        • Coloring devices to help with certain status messages or analyses
        • Showing temperature values or sensor data on certain devices or racks
        • Identifying devices using the integrated QR scanner and showing device-related information
        • Showing key data about the real devices, such as racks, servers or other devices in the data center. The weight, dimensions, consumption values, number, port usage and much more besides can thus be shown on the real device in an instant
        • Showing installation instructions one step at a time within the context of device installation
        • Live communication and work support using cameras while installing new components or moving existing devices. The remote planner can therefore carry out the work steps together with the installation engineer on site
        • Support within the context of the approval process or validation indicating that the installation process has been carried out according to plan. This can be easily implemented using augmented reality provided as live streams or image uploads
      • Demo video [VR and AR], NetWork16 in Leipzig
      • Key field for possible use: Data Center Management
      • Needs: "all the relevant data must be collected and held available in a central data model. In turn, the necessary data for the VR and AR applications can be made available from this central database"
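The central-data-model requirement above can be made concrete with a small sketch. Everything below is invented for illustration (device names, fields, and the `overlay_for` helper are hypothetical): an AR client resolves a scanned QR code against a central device database and assembles the overlay to render next to the physical device.

```python
# Hypothetical sketch of a central data model serving AR overlays.
# All device data and field names below are invented for illustration.

DEVICE_DB = {
    "QR-0042": {
        "name": "rack-07/server-12",
        "weight_kg": 28.5,
        "ports_total": 48,
        "ports_used": 31,
        "inlet_temp_c": 24.1,
    },
}

def overlay_for(qr_code):
    """Return the key/value pairs an AR headset would render
    next to the physical device after scanning its QR code."""
    device = DEVICE_DB.get(qr_code)
    if device is None:
        return {"warning": "unknown device - not in central data model"}
    free = device["ports_total"] - device["ports_used"]
    return {
        "label": device["name"],
        "inlet_temp": f'{device["inlet_temp_c"]} C',
        "free_ports": free,
    }

print(overlay_for("QR-0042"))
```

The point of the sketch is the article's "single source of truth": both the AR and VR use cases listed above read from the same database rather than from per-tool silos.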
    • From Data Center Frontier, "The Virtual Reality Future: Bigger Pipes, More Data Centers"
      • "'It’s going to need a lot of software and a lot of infrastructure,' Scoble (technology evangelist at Microsoft and Rackspace) told an audience of data center professionals at DatacenterDynamics Enterprise"
      • "Virtual reality (VR) allows users to interact with digital environments and objects, typically using a headset. When you turn your head, you see other areas of a 360-degree digital environment. Some VR apps include controllers that allow users to manipulate digital objects"
      • "these 360-degree video applications require a LOT of data"
      • "data will have to move across the network, and in some cases be cached locally to assure low latency"
      • Facebook: a "pioneer in both VR and data centers, has created a new project to boost the world's bandwidth. The Telecom Infra Project seeks to “reimagine infrastructure” to deliver the data-intensive video and virtual reality workloads of the future. Facebook is also working on video compression technologies, and sharing its advances with the industry"
        • "Scaling traditional telecom infrastructure to meet this global data challenge is not moving as fast as people need it to," said Jay Parikh, Global Head of Engineering and Infrastructure at Facebook"
      • "Virtual reality, machine learning and the IoT will require unprecedented levels of computing horsepower, connectivity and data storage"
      • "...virtual reality will also be a transformative technology, and will see meaningful consumer adoption in three to five years"
      • Augmented Reality (AR) headset developers (as of 2016):
        • "Magic Leap is a Florida-based startup backed by Google, Alibaba, Qualcomm and Warner Brothers"
        • "Facebook’s Oculus technology includes the long-awaited Oculus Rift headset...which also is developing VR Touch controllers, and an application environment called Toybox"
        • "SnapChat also has ambitions in the VR headgear market. Last year it acquired the maker of Epiphany Eyewear, which has built-in HD video capabilities"
        • "The integration of VR and augmented reality into eyewear will require a leap ahead in mobile networks"
        • "Facebook says, global digital infrastructure will need a major upgrade. Facebook, Google and Microsoft all have projects to bring wireless connectivity to the developing world, using everything from balloons to drones to satellites"
        • "Facebook and its partners hope to 'reimagine traditional approaches to building and deploying telecom network infrastructure'"
        • "By 2020, more than a zettabyte – that’s 1,000 exabytes – of information will be exchanged over telecom networks, much of it in data-intensive formats like video and virtual reality"
        • "Facebook is also doing advanced work on compression and encoding technologies for VR, and is sharing its work"
        • "The file sizes are so large they can be an impediment to delivering 360 video or VR in a quality manner at scale"
          • "The Facebook team has made refinements to how 360 video is displayed using cube maps. This approach has reduced file sizes by about 20%"
        • "The burden of delivering virtual reality technology will focus on the network. But the file sizes will require storage and data center infrastructure"
        • Equinix: "first data center operator to join the Telecom Infra Project, along with the Open Compute Project’s telco initiative"
        • IO: exploring VR for data center monitoring
        • Google: VR tours possible for The Dalles, Oregon
        • ** "If the technology succeeds, it will require better latency – which could mean more storage in current data center hubs, as well as more edge data centers to distribute content"
        • Per Facebook’s Zuckerberg, "the data center industry will be in building mode for some time to come"
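Facebook's cube-map figure can be sanity-checked with back-of-envelope pixel arithmetic. The sketch below compares raw pixel counts for an equirectangular 360 frame versus a cube map at comparable angular resolution; it is a simplification (real savings depend on encoding, which is why the article reports ~20% rather than the raw 25% computed here).

```python
# Back-of-envelope comparison of raw pixel counts for 360 video.
# Simplified model; actual savings depend on encoding (~20% per the article).

def equirect_pixels(height):
    # Equirectangular projection: frame width is twice its height.
    return 2 * height * height

def cubemap_pixels(height):
    # Six cube faces, each roughly half the equirect height on a side.
    face = height // 2
    return 6 * face * face

h = 2048
eq, cm = equirect_pixels(h), cubemap_pixels(h)
print(f"equirect: {eq:,} px, cubemap: {cm:,} px, reduction: {1 - cm / eq:.0%}")
```

Under this simplified model the cube map carries 1.5H² pixels versus 2H² for equirectangular, a 25% raw reduction, which is in the same ballpark as Facebook's reported ~20% file-size savings.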
    • From Data Center Knowledge, "Artificial Intelligence: A New Frontier in Data Center Innovation"
      • "Google’s effort is only the latest in a series of initiatives to create an electronic 'data center brain' that can analyze IT infrastructure"
      • "DevOps movement seeks to “automate all the things” in a data center, while the push for greater efficiency has driven the development of smarter cooling systems"
      • "Data center managers love technology, but they don’t totally trust it"
      • "Google has begun using a neural network to analyze the oceans of data it collects about its server farms and to recommend ways to improve them...machine learning will allow Google to reach new frontiers in efficiency in its data centers, moving beyond what its engineers can see and analyze"
      • "'lights out' data centers, these are typically facilities being managed through remote monitoring, with humans rather than machines making the decisions"
      • "...the endgame is using artificial intelligence to help design better data centers, not to replace the humans running them"
      • Romonet: UK-based maker of data center management tools
        • Product: Prognose (2010): software program that uses machine learning to build predictive models for data center operations
        • Focuses on modeling the total cost of ownership (TCO) of operating the entire data center, rather than a single metric such as PUE (Power Usage Effectiveness)... with 97% accuracy across a year of operations
        • Accurately predict and manage financial risk within their data center or cloud computing environment. Its tools can work from design and engineering documents for a data center to build a simulation of how the facility will operate
        • Working from engineering documents allows Romonet to provide a detailed operational analysis without the need for thermal sensors, airflow monitoring or any agents – which also allows it to analyze a working facility without impacting its operations
        • ** Allowing for testing of ongoing operations without impacting live environments, avoiding risk
      • Vigilent: uses machine learning to provide real-time optimization of cooling within server rooms
        • Vigilent’s AI software collects temperature data from wireless sensors distributed throughout the data hall and dynamically manages the environment to address hot spots from shifting workloads
        • Core AI intelligence, on a server, "learns" over time
          • This begins from the time the system is commissioned: an initial behavior profile is developed during a multi-hour period of perturbation, in which responses are provoked and measured. Learning continues throughout regular use of the system, as it simultaneously controls the devices in its network
        • Integrated into Schneider Electric's DCIM suite
        • ** The clearest efficiency gains are in cooling, guided by 3D CFD (computational fluid dynamics) models of temperatures and airflows
      • Dynamic Smart Cooling (since 2006): HP's product collected data from a sensor network and used it to automate management of computer room air conditioners (CRACs) and air handlers (CRAHs); revamped in 2009 after operator pushback against ceding control
      • SuperNap: Las Vegas, NV colocation provider; developed custom cooling units that sit outside the data center and can automatically switch between six different modes of cooling, depending upon the external weather conditions
      • Automated cooling systems that can adjust to temperature and pressure changes in the server environment: Opengate Data Systems, Intel, Brocade and SynapSense
      • Facebook has patented a load balancer that can redistribute workloads across servers to shift compute activity away from “hot spots” inside racks
      • Hyperscale data centers will lead the way in the use of machine learning to enhance designs for peak efficiency
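The cooling-optimization systems above (Vigilent, Dynamic Smart Cooling, SuperNap's units) all close a loop between temperature sensors and cooling output. As an illustration only, and not any vendor's actual algorithm, here is a minimal proportional control loop over hypothetical zone sensors:

```python
# Illustrative sketch (not Vigilent's or HP's actual algorithm): a simple
# proportional control loop that raises cooling output where zones run
# hot and relaxes it where they run cool.

TARGET_C = 25.0   # desired inlet temperature
GAIN = 0.1        # fan-level change per degree of error

def adjust_cooling(sensor_temps, crac_outputs):
    """sensor_temps: {zone: temp_c}; crac_outputs: {zone: fan level 0.0-1.0}.
    Returns new fan levels, clamped to [0, 1]."""
    new_outputs = {}
    for zone, temp in sensor_temps.items():
        error = temp - TARGET_C
        level = crac_outputs.get(zone, 0.5) + GAIN * error
        new_outputs[zone] = max(0.0, min(1.0, level))
    return new_outputs

temps = {"hall-A": 29.0, "hall-B": 23.0}
outputs = {"hall-A": 0.5, "hall-B": 0.5}
print(adjust_cooling(temps, outputs))
# hall-A is above target, so its fan level rises; hall-B is below, so it drops
```

What the machine-learning products add on top of a fixed loop like this is learning the gain and the cross-zone interactions from data, rather than hard-coding them.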
    • From Data Center Knowledge, "Artificial Intelligence and the Evolution of Data Centers"
      • "2016 research by the Ponemon Institute, the average cost of a single data center outage today is approximately $730,000"
      • "Worldwide, data centers consume approximately 3% of the global electricity supply"
      • Artificial Intelligence (AI): technology that enables machines to execute processes that would otherwise require human intelligence
        • Machines with AI can interpret data to form their own conclusions and make reasonable operating decisions automatically
        • Used to optimize resource management
      • Uses of AI:
        • Long-Term Planning (R&D test dev); useful for environmental considerations
        • Game Theory (predictive analysis); modeling complex scenarios
        • Collective Robot Behaviour; allowing multi-system optimization
      • Market: CAGR of 62.9% predicted from 2016 to 2022, reaching $16.06B
        • HPC DC environments by IBM, Intel, Microsoft
      • DC Uses: in conjunction with data center infrastructure management (DCIM) technologies to analyze power, cooling and capacity planning, as well as the overall health and status of critical backend systems
        • Google's DeepMind: automatically manages power usage in parts of Google's DCs by discovering and reporting inefficiencies across 120 DC variables (including fans, cooling systems, windows)
          • Expected to cut cooling energy by 40% and overall power overhead (PUE) by 15%
          • Expected reductions to company's carbon footprint
        • Options available to enhance security, improve uptime and reduce costs -- without compromising performance -- in real-time / live environments
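The market forecast above can be sanity-checked with the CAGR formula: compounding at 62.9% per year over the six years from 2016 to 2022 and ending at $16.06B implies the 2016 base market size.

```python
# Sanity check on the forecast above: what 2016 base does a 62.9% CAGR
# ending at $16.06B in 2022 imply?

cagr = 0.629
end_value_b = 16.06
years = 2022 - 2016  # six compounding periods

implied_base_b = end_value_b / (1 + cagr) ** years
print(f"implied 2016 market: ${implied_base_b:.2f}B")
```

The implied base is roughly $0.86B, a plausible order of magnitude for the 2016 AI-in-data-center market, so the headline figures are internally consistent.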
    • From Data Center Knowledge, "Google Using Machine Learning to Boost Data Center Efficiency"
      • Using "neural network to analyze the oceans of data it collects about its server farms and to recommend ways to improve them"
      • "The human remains in charge"... "You still need humans to make good judgements about these things"
      • "neural network has been able to predict Google’s Power Usage Effectiveness (PUE) with 99.6% accuracy"
        • Gao cites "19 variables and then designed the neural network, a machine learning system that can analyze large datasets to recognize patterns"
        • Gao Whitepaper (hardware & software combined setpoint optimization)
        • PUE changes: refinements in DC load migrations during power infrastructure upgrades, small changes in water temperatures across components of chiller system
      • Machines can view more sensors simultaneously to process and make decisions faster
      • "Neural networks mimic how the human brain works, allowing computers to adapt and 'learn' tasks without being explicitly programmed for them"
      • Courses in AI taught by colleges such as Stanford
      • "The model is nothing more than series of differential calculus equations...The model begins to learn about the interactions between these variables."
      • Hardware requirements: machine learning doesn’t require unusual computing horsepower... it runs on a single server and could even work on a high-end desktop
      • Expects biggest return on neural networks from design cycle of data centers; testing changes and innovations
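The idea behind Gao's model can be sketched in miniature. The toy below uses synthetic data and a plain linear model fit by gradient descent, not Google's actual neural network or its 19 variables; it only illustrates "learning" a PUE relationship from operational data, consistent with the note above that this runs comfortably on a single machine.

```python
# Toy sketch of learning to predict PUE from operational variables.
# Synthetic data and a linear model -- NOT Google's neural network.
import random

random.seed(0)

def make_sample():
    it_load = random.uniform(0.4, 1.0)    # fraction of IT capacity in use
    outside_temp = random.uniform(5, 35)  # degrees C
    # Hidden "true" relationship the model should recover:
    pue = 1.1 + 0.004 * outside_temp + 0.1 * (1 - it_load)
    return (it_load, outside_temp), pue

data = [make_sample() for _ in range(500)]

# Model: pue ~ w0 + w1*it_load + w2*outside_temp, fit by stochastic
# gradient descent on squared error.
w = [0.0, 0.0, 0.0]
lr = 0.001
for _ in range(2000):
    for (it_load, temp), target in data:
        pred = w[0] + w[1] * it_load + w[2] * temp
        err = pred - target
        w[0] -= lr * err
        w[1] -= lr * err * it_load
        w[2] -= lr * err * temp

(it_load, temp), actual = data[0]
predicted = w[0] + w[1] * it_load + w[2] * temp
print(f"predicted PUE {predicted:.3f} vs actual {actual:.3f}")
```

The real system differs in scale (19 correlated inputs, nonlinear interactions, hence a neural network), but the workflow is the same: collect sensor data, fit a predictive model, then use it to evaluate candidate operating changes before making them.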
    • From Data Center Knowledge, "A Match Made in the Data Center - AI & Robotics"
      • An October 2016 IDC Spending Guide found that cognitive/AI solutions will experience a CAGR of 55.1% over 2016-2020
      • AI can potentially bring a great deal of benefit to the data center, specifically when layered onto robotics, deployed to make physical adjustments to the network
        • Better control of the physical connections within the network
          • Simpler and more dynamic data center network infrastructure
          • Future-proofing of critical infrastructure
          • Decreased operational costs
          • Extend life of physical assets
        • Significantly improved security incident response
          • Decrease in security concerns and increase in reaction time
          • Can eliminate connections to other systems remotely
          • Scale, accuracy, priority
          • Switching of connections within the network will be able to occur based on network settings and real-time traffic
    • From The Whir, "NVIDIA CEO: AI Workloads Will Flood Data Centers"
      • NVIDIA CEO: Jensen Huang
      • NVIDIA VP: Ian Buck is in charge of the company’s Accelerated Computing unit
      • NVIDIA is the top maker of GPUs used in computing systems for Machine Learning
        • GPUs work in tandem with CPUs, accelerating the processing necessary to both train machines to do certain tasks and to execute them
          • Hyper-scalers and cloud providers are excited about GPUs
        • Advances in Machine Learning are driving many of the top strategic trends
          • Growth = data center activity
          • Up until now, the most impactful production applications of Deep Learning have been developed and deployed by a handful of hyper-scale cloud giants – such as Google, Microsoft, Facebook, and Baidu – but NVIDIA sees the technology starting to proliferate beyond the massive cloud data centers
          • Expectations: appearing in Managed Service Providers to Banks
      • Questions:
        • Models: on-prem, cloud, hybrid?
          • ** Most hadn’t yet deployed AI applications into production
            • Cloud services can capture markets for growth
            • Because videos are so data-intensive, companies use on-prem compute clusters to handle, while outsourcing the actual training workloads to cloud GPUs
            • Cloud GPUs are also a good way to start exploring Deep Learning for a company without committing a lot of capital upfront
        • On-prem infrastructure needs:
          • How much will be needed to train Deep Learning algorithms?
          • How much will be needed for inference?
          • Maximum performance or efficiency?
            • Computationally intensive applications used to teach neural networks things like speech and image recognition
              • ** Suggesting up to 30kW / rack
            • Tradeoff is between performance and the number of users, or workloads, the infrastructure can support simultaneously
        • Evidence and reasoning (inference) capabilities at the edge?
          • Inferencing workloads – applications neural networks use to apply what they’ve been trained to do – require fewer GPUs and less power, but they have to perform extremely fast
            • **Proximity (edge) locations are required to maintain low latencies, to increase processing speeds (close to real-time responses)
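The latency argument for edge facilities comes down to propagation delay. A rough calculation (assuming light in fiber at ~200,000 km/s, about two-thirds of c, and illustrative distances) shows why proximity matters for inference workloads:

```python
# Rough fiber-latency arithmetic behind the edge argument above.
# Assumes ~200,000 km/s signal speed in fiber; distances are illustrative.

FIBER_KM_PER_MS = 200.0  # 200,000 km/s expressed as km per millisecond

def round_trip_ms(distance_km):
    return 2 * distance_km / FIBER_KM_PER_MS

for label, km in [("regional data center", 1500), ("edge facility", 50)]:
    print(f"{label} at {km} km: {round_trip_ms(km):.1f} ms round trip")
```

Propagation alone costs 15 ms round trip to a data center 1,500 km away versus 0.5 ms to an edge facility 50 km away, before any queueing or processing time, which is the floor that no amount of compute can buy back.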
    • From Quora, "What is DevOps" [David Virtser, DevOps leader & fan]
      • "DevOps is a culture. A healthy culture of organization's Dev and Ops guys to cooperate with each other"
      • "DevOps is talking about many aspects of Development and Operations processes while trying to optimize the engineering organization for growth and infrastructure for scale"
      • "DevOps culture is talking about:
        1. Engineers empowerment - by giving engineers more responsibility over the whole application lifecycle process. (dev -> test -> deploy -> monitor -> be on call).
        2. Test Driven Development - write tests before you write code. Unit tests, integration tests, system tests. This will help increase the quality of your service and give you more confidence to release faster and more frequent.
        3. Automation - automate everything that can be automated. Test automation, infrastructure automation (infrastructure as a code), deployment automation, etc.
        4. Monitoring - monitor your apps, build monitoring alerts well. It should save your time, don't flood with metrics and alerts.
        5. Self service - provide a self service for any framework that you build or anything that you do. Don't be a bottleneck.
        6. People - but most importantly it's talking about a people culture that should be open-minded, transparent, egoless, professional, with a "can do" attitude."
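Point 2 (test-driven development) can be shown in a few lines: the tests are written first, then just enough code to satisfy them. The `healthy` check below is a hypothetical example, not from any of the cited sources.

```python
# Minimal test-driven-development sketch: in TDD the TestCase below is
# written first, then `healthy` is implemented to make it pass.
import unittest

def healthy(metrics):
    """Hypothetical service check: healthy if error rate is under 1%.
    Missing metrics are treated as unhealthy."""
    return metrics.get("error_rate", 1.0) < 0.01

class TestHealthCheck(unittest.TestCase):
    def test_low_error_rate_is_healthy(self):
        self.assertTrue(healthy({"error_rate": 0.001}))

    def test_missing_metrics_are_unhealthy(self):
        self.assertFalse(healthy({}))

unittest.main(exit=False, argv=["tdd-sketch"])
```

The discipline is the ordering: a failing test defines the behavior before the implementation exists, which is what gives the "confidence to release faster and more frequently" claimed above.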
    • From Quora, "What is DevOps" [Jonathan Nimrodi, Client Partner at Facebook]
      • "Devops is not just a position, but rather a software development approach which synchronizes development and operations to enable an agility that blurs the distinction between development and operations. Stuart Lange, senior applications developer, nails it when he says that DevOps is 'a technology philosophy that requires communication, collaboration, and above all, a focus on producing a high quality software system that is a joy to develop, operate, and use.'"
    • Products
    Last modified: 30 May 2017 12:41 AM | Peter Gmiter
    Moved reply from State of Programming: 22 Aug 2017 8:19 PM
  • 30 May 2017 7:14 AM
    Message # 4859678

    2017 Q3 Chapter Meeting Programming

    Title: "Data Center Beyond Reality"

    Dates/Times: September 28th | 5-8pm


    • Concept and Applications

    • Terms and Definitions

    • Technologies and Products

    • Transformation and Deployment

    • Uses and Operations

    • Click to PARTICIPATE or nominate someone to present


    Event Sponsors:

    Event Volunteers:

    Last modified: 07 Jun 2017 4:25 PM | Peter Gmiter

7x24 Exchange
Southern California Chapter

