Utility Computing
Old Sci-Fi Computing Vision
Isaac Asimov 1956
IT Resources as Utility
First approach to this: virtualization of memory, disks, CPUs
Today: Cloud Computing and Containerization
Computational Models: 1970s-1980s
Mainframe Era:
Monolithic
Powerful computers used mainly by large organizations for critical applications
Centralized computing
No communication with outside
Time-sharing
Computational Models: 1990s-2000s
Client-Server Era:
There is more than one computer in the world
Common hardware becomes more powerful
Distributed computing
Web
Computational Models: 2010+
Cloud Era:
Increased network bandwidth and reliability
No need to invest large amounts of money in hardware
Large data centers
Virtualized resources
What is DevOps?
DevOps is a software development methodology that combines and automates software Development (Dev) and information technology Operations (Ops)
To shorten the systems development life cycle
Delivering features, fixes, and updates frequently
In close alignment with business objectives
What is a tool to achieve DevOps?
Virtualization:
Act of creating a virtual (rather than actual) version of something, including virtual computer hardware platforms, storage devices, and computer network resources
Different forms of virtualization, such as hardware, desktop, …
→ Culmination in a new computing paradigm: Cloud Computing
5 Different Forms of Virtualization
Hardware virtualization or platform virtualization refers to the creation of a virtual machine that acts like a real computer with an operating system
OS-level virtualization, also known as containerization, refers to an operating system paradigm in which the kernel allows the existence of multiple isolated user-space instances
Desktop virtualization separates the logical desktop from the physical machine (e.g., game streaming Stadia, Remote Desktop)
Application Virtualization (also known as Process Virtualization) is software technology that encapsulates computer programs from the underlying operating system on which it is executed (e.g. sandboxing)
Network virtualization is the process of combining hardware and software network resources and network functionality into a single, software-based administrative entity (e.g. overlay virtual network)
Techniques for Virtualization
Hardware virtualization
Virtual machine that acts like a real computer with an operating system
Hypervisor necessary, i.e., computer software, firmware or hardware that creates and runs virtual machines
Type I: bare-metal or native hypervisor that are deployed directly over the host's system hardware without any underlying operating systems or software, e.g., Microsoft Hyper-V hypervisor, VMware ESXi, IBM z/VM, KVM, Xen
e.g., Xen used by Amazon EC2, Rackspace Cloud …
Type II: hosted hypervisor that runs as a software layer within a physical operating system, e.g., Oracle VirtualBox, Parallels Desktop, VMware Player
Some solutions blur these boundaries
OS-level virtualization
Host OS-kernel allows multiple isolated user-space instances
Such instances, called containers (zones, jails, etc.), may look like real computers from the point of view of programs running in them
A container is a runtime instance of an executable package that includes everything needed to run an application
Such as code, runtime libraries, environment variables, and configuration files
Comparison OS-Level and Hardware virtualization
Container is lightweight (Example Docker)
A virtual machine runs as full blown “guest” OS with virtual access to host resources through a hypervisor
Docker: "It's all about shipping!"
Docker is a platform for Developers and Operators (Sysadmins) to automate the
development,
deployment, and
running of applications
with containers: Tool for DevOps
Containers leverage the low-level mechanics of the host operating system, to provide most of the isolation of virtual machines at a fraction of the computing power
Implements a high-level API to provide lightweight containers that run processes in isolation
Developer: “Build once … (finally) run anywhere”
Operator: “Configure once … run anything”
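To make this concrete, a minimal sketch using the Docker SDK for Python; the image tag and echoed message are arbitrary examples:

    import docker  # Docker SDK for Python: pip install docker

    client = docker.from_env()          # connect to the local Docker daemon
    client.images.pull("alpine:3.19")   # an image: executable package with everything needed
    output = client.containers.run(     # a container: the image executed as an isolated process
        "alpine:3.19", ["echo", "build once, run anywhere"], remove=True
    )
    print(output.decode())

The same image runs identically on a developer laptop and a production host, which is the "run anywhere" promise above.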
Image and Container
An Image is an executable package that includes everything needed to run an application
Code, runtime libraries, environment variables, and configuration files
A Container is what the image becomes in memory when executed (that is, an image with state, or a user process)
Containerization is increasingly popular because containers are:
Flexible: Even the most complex applications can be containerized
Lightweight: Containers leverage and share the host kernel
Interchangeable: You can deploy updates and upgrades on-the-fly
Portable: You can build locally, deploy to the cloud, and run anywhere
Scalable: You can increase and automatically distribute container replicas
Stackable: You can stack services vertically and on-the-fly
What are well-known commercial providers supporting Docker?
Amazon Web Services
Microsoft Azure
Digital Ocean
Google Compute Engine
OpenStack
Rackspace
… (many, many more)
Increased attention since 2013/2014
Top Technologies running on Docker
Docker Adoption Status by Infrastructure Size
Desktop virtualization
Application Virtualization
Network virtualization
Virtualization as Paradigm: Cloud Computing
Starting point "Distributed Systems"
„A distributed System is one in which (hardware or software) components located at networked computers communicate and coordinate their actions only by passing messages“
"The largest supercomputer of the world is the Internet"
Pool of resources: CPUs (work power), Disks (storage), Networks (infrastructures), Knowledge (competence), …
Different computing paradigms in the last few years
Distributed and Parallel Computing (HPC, Supercomputer, …)
Cluster
Web computing, Internet of Services and Things
Grid Computing
Cloud Computing
Cloud Computing Hype?
The most significant change in the IT priorities of ESG’s 2011 survey is the increased importance attached to cloud computing services. - ESG Research Thomson Reuters
IT spending on cloud computing (Gartner)
Estimate for 2023: US$ 591 billion
Where we have been using cloud computing for a long time:
Facebook
Twitter
Gmail
Hotmail
Flickr
YouTube
Google Docs
Definition Cloud Computing (National Institute of Standards and Technology)
“Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models.”
Five Characteristics of Cloud Computing
Resource Pooling
Resources are shared by multiple tenants
Sense of location independence
Rapid elasticity
Resources can scale Up/Down
Any quantity at any time
On-demand Self-service
Unilaterally (self) provision of computing capabilities as needed
Within mere minutes or seconds (see the provisioning sketch after this list)
Broad Network Access
Access from anywhere
Standard mechanisms
Measured Service
Pay-as-you-go Service
Metering capability
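As an illustration of on-demand self-service, a hedged sketch using boto3 (the AWS SDK for Python); the region and AMI ID are placeholders, and each call is billed per use, in line with the measured-service characteristic:

    import boto3  # AWS SDK for Python

    ec2 = boto3.client("ec2", region_name="eu-central-1")  # placeholder region
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    # server provisioned within seconds, without any service-provider interaction
    print(response["Instances"][0]["InstanceId"])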
Software Platform Infrastructure (SPI) Service Models
Infrastructure as a Service (IaaS)
Provision of computer infrastructure as a service
Hardware, such as cores, disks, networks, OSs
Platform as a Service (PaaS)
Solution stack as a service
Provides computational resources via a platform upon which applications and services can be developed and hosted
Software as a Service (SaaS)
Model of software deployment whereby a provider licenses an application to consumers for use as a service on demand
Google Docs, GMail, Salesforce CRM
Microsoft 365, OneDrive, OneNote, Teams, Word, …
Service Model: Cloud Database
A Cloud Database is a database that typically runs on a cloud computing platform; access to it is provided as a service
Virtual machine Image: Users execute their own machine image with a database installed
Database Container
DBaaS
Database as a Service (DBaaS)
Application owners do not have to maintain the database themselves
Database service provider takes responsibility for installing and maintaining the database system and database
Application owners are charged according to their usage of the service
Examples:
MongoDB Atlas: MongoDB as a service (see the connection sketch after the benefits list below)
MongoDB Atlas Benefits
Cloud Database: flexible, affordable, and scalable database management.
DBaaS (Database-as-a-Service): fully managed cloud platform; no installation needed
Scalability: As an application grows in size and complexity, MongoDB can grow
24/7 Availability: built-in backup and recovery, ensuring it is 'always on.'
Handling Large Datasets
Flexible Document Schemas
Powerful Querying and Analytics
Multi-cloud Deployment
Cost Efficiency
Security
Automated Backup and Recovery
Data Visualization Tool
Easy Installation and Integration
Data API
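A minimal sketch of DBaaS usage with pymongo; the connection string is a placeholder for the URI Atlas generates for a cluster:

    from datetime import datetime, timezone
    from pymongo import MongoClient

    # placeholder Atlas connection string (user, password, and host come from the Atlas UI)
    client = MongoClient("mongodb+srv://user:password@cluster0.example.mongodb.net/")
    db = client["appdb"]
    db.events.insert_one({"type": "signup", "ts": datetime.now(timezone.utc)})
    print(db.events.count_documents({}))

No database installation or maintenance is needed on the application side; the provider operates the cluster.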
Examples: Databases as Amazon Web Services
Examples: Databases as Microsoft Azure Services
Four Deployment Models (Cloud)
Private, community, public, and hybrid cloud (NIST)
Many IT Aspects Influencing Cloud Computing
Internet
Web 2.0
Fault-Tolerance
Parallel and Distributed Computing
Programming Models
Storage Technologies
Virtualization
Summarizing Promises of the Cloud
Clouds for Startup/SME, for Rapid Prototyping
Infrastructure and platform services are very convenient for startups or SMEs and for rapid prototyping of solutions
Advantage of the Pay-per-use cost model
Allows rolling out a business model with limited financial effort
No costly SW and HW necessary
High initial investments are often lost if the idea does not fly
Allows SME to focus on core business
Computer center services are „out-sourced“
What about out-sourcing of data?
Often refused due to security, privacy and legal issues
Company and customer data
Cloud Outsourcing Path of companies
Changes for the user/developer (Cloud)
Motivation for Cloud Computing in the future
When No Cloud
Motivation of Security Engineering
Intrusion:
1975: problem recognized
1987: first models and systems of intrusion detection
Today: 3064 search results on IEEE
Theft
Malware
1949: Von Neumann starts working on self-replicating automata, published 1966
1971: Creeper, considered the first virus, infects ARPANET computers
→ First anti-virus program: “Reaper” was created to delete Creeper
Back Doors in Software and IT Systems
Motivation – Back Doors in Software and IT Systems: Examples
SolarWinds cyberattack (2020): In 2020, a cyberattack on an unprecedented scale, known as the Sunburst attack, targeted SolarWinds, a major software company based in Austin, Texas. The attackers gained access to the company’s systems through a back door in the SolarWinds Orion software, which was then used to compromise the systems of thousands of SolarWinds’ customers, including several U.S. government agencies, for up to 14 months.
NotPetya malware attack (2017): focused on Ukraine, inflicted enormous collateral damage across the globe. It’s estimated that organizations collectively lost $1 billion because of the attack.
Ukraine power grid attack (2015): notable for being the first successful cyberattack on a power grid.
Cyberattacks on Estonia (2007): massively destabilized the Baltic state’s infrastructure and economy, causing nationwide communication breakdowns, banking failures and media blackouts
Catastrophic Damage Resulting from Security Flaws (Gartner)
Spending on information security and risk management products and services is forecast to grow 11.3% to reach more than $188.3 billion in 2023.
Cloud security is the category forecast to have the strongest growth over the next two years (26.8% in 2023)
Reasons for Importance of Security
Increase of data value
Huge data volumes
Strong connection of IT and companies (see IT Governance)
Increase of number of potential attackers
Increasing number of users
Accessible Know-How of security holes
Problem: open-source software
Decentralization
Increased number of attacks
Lacking law regulations
Low ethical barrier
Lacking control mechanisms
Security controls (security countermeasures)
The CIA triad of Confidentiality, Integrity, and Availability is at the heart of information security.
Confidentiality
Data integrity
Availability
Authentication
Derived Controls
Data authenticity
Non-repudiation
Access control
Accountability
Data Authenticity
Data has not been modified while in transit (data integrity) and receiving party can verify the source of the message (provenance of data)
Typically achieved by combining message authentication codes (MACs), authenticated encryption (AE), or digital signatures.
Does not include non-repudiation
Examples: Digital Signatures, PGP with emails
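A minimal sketch of a MAC check with Python’s standard library (the key and message are arbitrary examples); because sender and receiver share the same key, a MAC alone cannot provide non-repudiation, as stated above:

    import hashlib
    import hmac

    key = b"pre-shared-secret"                         # assumed shared by sender and receiver
    msg = b"transfer 100 EUR to account 42"
    tag = hmac.new(key, msg, hashlib.sha256).digest()  # sender attaches this tag to the message

    # receiver recomputes the tag and compares in constant time
    valid = hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())
    print(valid)  # True only if the message was not modified in transit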
Non-Repudiation
Asserts the assignment of an action to a subject, a proof of authenticity of an action, which cannot be denied by an authenticated party
Examples
Sending (i) and receiving (ii) of messages: provide proof that (i) you indeed sent the message, (ii) the message was indeed sent by the claimed sender, and that it wasn’t tampered with during transmission.
Superuser/Administrator responsibilities: track actions and tie them back to the specific administrator who performed the actions.
Methods are typically digitally signed acknowledgments
Access control is the selective restriction of access to a place or other resource
Permission to access a resource is called authorization.
Access control builds on correct authentication of user (or programs)
Examples: Access control of an OS for single users, group, or rest of world, ABAC (Attribute-Based Access Control) vs. RBAC (Role-Based Access Control)
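To illustrate, a toy RBAC check in Python (role and permission names are invented for the example):

    # hypothetical role-to-permission and user-to-role assignments
    ROLES = {"admin": {"read", "write", "delete"}, "analyst": {"read"}}
    USER_ROLES = {"alice": {"admin"}, "bob": {"analyst"}}

    def is_authorized(user: str, permission: str) -> bool:
        # authorization presumes the user was already correctly authenticated
        return any(permission in ROLES[role] for role in USER_ROLES.get(user, set()))

    print(is_authorized("bob", "write"))     # False: analysts may only read
    print(is_authorized("alice", "delete"))  # True

ABAC would replace the static role lookup with a policy over attributes of user, resource, and context.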
Accountability is a service logging which user has accessed which resources at what time
Needs working access control and non-repudiation
Transaction logs (date, time, number, and duration of used resources)
Basis of commercial use (building block of cloud computing)
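A minimal sketch of such access logging with Python’s logging module (file name and record fields are illustrative):

    import logging

    logging.basicConfig(filename="access.log", level=logging.INFO,
                        format="%(asctime)s %(message)s")

    def record_access(user: str, resource: str, action: str) -> None:
        # one transaction-log entry per access: who, which resource, which action, when
        logging.info("user=%s resource=%s action=%s", user, resource, action)

    record_access("alice", "/billing/report", "read")

Such logs are the metering basis for pay-per-use billing in the cloud.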
Classical Principles for Protected IT Systems
Saltzer and Schroeder 1975
Historical starting point: ensuring security in the Multics operating system (1974), Unix (1974), …
Economy of mechanism: The security mechanisms and processes used must be so simple that they can be applied automatically and routinely
„Keep the design as simple and small as possible.“
Fail-safe defaults: Base access decisions on permission rather than exclusion
„The default situation is lack of access.“
Complete mediation: It forces a system-wide view of access control and implies that a foolproof method of identifying the source of every request must be devised
„Every access to every object must be checked for authority.“
Open design: The design of a system should not be secret, and all protection mechanisms must be open
„No security through obscurity.”
Separation of privilege: No single accident, deception, or breach of trust is sufficient to compromise the protected IT system
„Two keys“, „Multi-factor …“
Least privilege: Every program and every user of the system should operate using the least set of privileges necessary to complete the job
„Just-Need-to-know security rule“
Least common mechanism: Minimize the amount of mechanism common to more than one user and depended on by all users
„Restrict processes to one user only (if possible)“
Psychological acceptability: It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly
„Mental image of user’s protection goals matches the mechanisms to use“
Security Engineering
Security Engineering comprises all tools, processes, and methods for the design, implementation, and testing of protected IT systems.
Structured engineering approach („Security by Design“)
Goal: development of a comprehensive security model
Two examples
IT-Grundschutz
The IT Baseline Protection Catalogs, or IT-Grundschutz-Kataloge, are a collection of documents from the German Federal Office for Security in Information Technology (BSI, Bundesamt für Sicherheit in der Informationstechnik) that provide useful information („Best Practice“) for detecting weaknesses and combating attacks in the information technology (IT) environment (IT cluster), https://www.bsi.bund.de/
OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation)
The OCTAVE method is an approach used to assess an organization's information security needs based on risk analysis.
Developed by Carnegie Mellon University in cooperation with CERT (Computer Emergency Response Team), with support from the U.S. Department of Defense
Technical security concepts
Encryption
Certificate
Digital signatures
PGP method
HTTPS
Encryption - time to crack key
Encryption Methods
Asymmetric (Public-Key) Encryption
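A hedged sketch of public-key encryption using the Python cryptography package (key size and message are example choices):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    ciphertext = public_key.encrypt(b"secret message", oaep)  # anyone may encrypt
    plaintext = private_key.decrypt(ciphertext, oaep)         # only the key owner can decrypt
    print(plaintext)

In practice, asymmetric encryption usually protects a symmetric session key rather than bulk data.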
A (Public Key) Certificate is a digital document that maps a public key to the identity of a person
Certificate Authority guarantees the identity of this individual (person)
With the creation of a certificate, a key pair (private and public key) is instantiated and assigned to the owner of the certificate
Essential components
Serial number
Personal data (name, company)
Public key of person or organization
Signature of the certificate authority, created with the issuer's private key
A certificate contains no secret information!
Validity duration of Certificate
Invalid after deadline
Possibility of Certificate Revocation
Usually, a certificate follows Standard X.509, Vers. 3
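The listed components can be inspected directly; a small sketch using Python’s ssl module and the cryptography package (the host name is an arbitrary example):

    import ssl
    from cryptography import x509

    pem = ssl.get_server_certificate(("www.example.com", 443))  # fetch the server's certificate
    cert = x509.load_pem_x509_certificate(pem.encode())

    print(cert.version)                                 # typically Version.v3 (X.509v3)
    print(cert.serial_number)                           # serial number
    print(cert.subject.rfc4514_string())                # personal data / organization
    print(cert.issuer.rfc4514_string())                 # certificate authority
    print(cert.not_valid_before, cert.not_valid_after)  # validity period
    # cert.signature holds the CA's signature; note: nothing in here is secret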
Certificate according to ITU-T X.509v3
Certification Authority
Digital Signature
Checking a Digital Signature
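A minimal sketch of creating and checking an RSA signature with the cryptography package (the padding and hash choices are common defaults, not the only options):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    message = b"contract text"

    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = private_key.sign(message, pss, hashes.SHA256())  # signer uses the private key

    try:
        # anyone can verify with the public key (e.g., taken from a certificate)
        private_key.public_key().verify(signature, message, pss, hashes.SHA256())
        print("signature valid")
    except InvalidSignature:
        print("signature invalid")  # raised if message or signature was altered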
Hash Algorithms - Examples
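For instance, computing digests with Python’s hashlib (the input is an arbitrary example):

    import hashlib

    data = b"hello cloud"
    print(hashlib.sha256(data).hexdigest())    # SHA-256: current standard choice
    print(hashlib.sha3_256(data).hexdigest())  # SHA-3 family
    print(hashlib.md5(data).hexdigest())       # MD5: broken, shown for comparison only

Any change to the input, even a single bit, yields a completely different digest.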
PGP Method
HTTPS Method
HTTPS Workflow
The SSL or TLS client sends a "client hello" message that lists cryptographic information such as the SSL or TLS version and, in the client's order of preference, the CipherSuites supported by the client. The message also contains a random byte string that is used in subsequent computations. The protocol allows for the "client hello" to include the data compression methods supported by the client.
The SSL or TLS server responds with a "server hello" message that contains the CipherSuite chosen by the server from the list provided by the client, the session ID, and another random byte string. The server also sends its digital certificate.
The SSL or TLS client verifies the server's digital certificate.
The SSL or TLS client sends the random byte string that enables both the client and the server to compute the secret key to be used for encrypting subsequent message data. The random byte string itself is encrypted with the server's public key.
If the SSL or TLS server sent a "client certificate request", the client sends a random byte string encrypted with the client's private key, together with the client's digital certificate, or a "no digital certificate alert".
The SSL or TLS server verifies the client's certificate.
The SSL or TLS client sends the server a "finished" message, which is encrypted with the secret key, indicating that the client part of the handshake is complete.
The SSL or TLS server sends the client a "finished" message, which is encrypted with the secret key, indicating that the server part of the handshake is complete.
For the duration of the SSL or TLS session, the server and client can now exchange messages that are symmetrically encrypted with the shared secret key.
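The whole handshake above is performed by one call in Python’s standard ssl module; a minimal client sketch (the host name is an arbitrary example):

    import socket
    import ssl

    context = ssl.create_default_context()  # verifies the server certificate against system CAs
    with socket.create_connection(("www.example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
            # handshake is complete here: certificate checked, secret key negotiated
            print(tls.version(), tls.cipher())  # negotiated protocol version and CipherSuite
            tls.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")
            print(tls.recv(200))  # response travels symmetrically encrypted on the wire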
HTTPS Workflow (2) SSL Scheme
HTTPS Workflow (3) TLS Scheme