% Commit e10f54bf, authored by Jürgen Walter: initial userguide version
%Draft Watermark
% For code, Queries, and OCL constraints %
\lstset{language=OCL,basicstyle=\ttfamily\small,float=ht,captionpos=b,keywordstyle=\bfseries,numbers=none,frame=tb}
\lstset{basicstyle=\ttfamily\small,keywordstyle=\bfseries,numbers=none,frame=tb,captionpos=b}
% Acronyms and Glossary %
\renewcommand*{\glstextformat}[1]{\textcolor{black}{#1}} % Use black as gls link color
\renewcommand*{\acronymname}{List of Acronyms and Abbreviations}
\newcommand{\forget}[1]{} % text intentionally left out for whatever reason
\newcommand{\sandbox}[1]{} % SANDBOX: internal multi-line comments by the author(s): reminders, notes, todos, detailed text blocks from related papers
\newcommand{\longer}[1]{} % commented out text that is candidate for integrating in extended version of the paper / proposal
\newcommand{\shorten}[1]{} % text currently shortened, but immediate candidate to put back in later if space permits ("reluctantly shortened"), normally it should be possible to put the text back in by simply removing the tag without further changes needed
\newcommand{\Shorten}[1]{#1} % text currently included, but immediate candidate to shorten if there is not enough space in the end, normally it should be possible to shorten the text by simply removing the tag without further changes needed
%\newcommand{\SHORTEN}[1]{{\textcolor{gray}{#1}}} % for highlighting text that could later be considered for shortening, the idea is that during the writing phase candidate texts for shorting are initially marked with "SHORTEN" and later are changed to "shorten" or "Shorten" once it is clear how much space is available
\newcommand{\todo}[1]{\textbf{\textsc{\textcolor{blue}{(TODO: #1)}}}}
%\newcommand{\note}[1]{\textbf{\textsc{\textcolor{green}{(NOTE: #1)}}}}
%\newcommand{\NOTE}[1]{\textbf{\textsc{\textcolor{red}{(IMPORTANT: #1)}}}}
%\newcommand{\revise}[1]{{\textcolor{red}{#1}}} % For marking text that needs significant revision to make it more clear / avoid confusion
%\newcommand{\refine}[1]{{\textcolor{orange}{#1}}} % For marking text where formulation should be refined/rephrased to make it more clear / avoid confusion
%\newcommand{\polish}[1]{{\textcolor{yellow}{#1}}} % For marking text where formulation should be polished to improve text flow and possibly resolve grammatical errors
%\newcommand{\comment}[3]{{\textcolor{blue}{\textbf{[#1:#2]} \{#3\} }}} % For including arbitrary comments concerning a marked text: \comment{SK}{grammar}{marked text}
\newcommand{\pre}[1]{\texttt{#1}} % for typewriter formatting
\newcommand{\DSM}{\textcolor{black}{configuration space sub-model}}
\newcommand{\Amodel}{system architecture QoS model}
\newcommand{\AAmodel}{System architecture QoS model} % for capital A at the beginning of sentences.
%### FORMATTING ###%
\newcommand{\code}[1]{\texttt{\hyphenchar\font45\relax #1}}
\newenvironment{enum}% compact enumeration
  {\begin{enumerate}\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
  {\end{enumerate}}
\newenvironment{itemi}% compact itemization
  {\begin{itemize}\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
  {\end{itemize}}
% This command justifies paragraphs which contain monospace font. More
% importantly, words printed in monospace font are hyphenated.
\newcommand*{\justify}{%
  \fontdimen2\font=0.4em% interword space
  \fontdimen3\font=0.2em% interword stretch
  \fontdimen4\font=0.1em% interword shrink
  \fontdimen7\font=0.1em% extra space
  \hyphenchar\font=`\-% allow hyphenation
}
% Document ------------------------------------------------------------------
%\date{} % remove date
%\pagestyle{empty} % remove page number
\title{The Descartes Modeling Language}
\author{Samuel Kounev, Fabian Brosig, Nikolaus Huber\\
Descartes Research Group\\
Chair of Computer Science II, Software Engineering\\
University of W{\"u}rzburg, Germany\\}
\todo{OK: FB: Application scenarios from Fabian's dissertation inserted}
\todo{OK: The current description of DMM is too interweaved with PCM. In the next version, we should consider making the presentation self-contained. One should not be required to understand PCM in order to get started with DMM. Also all aspects including overlapping areas should be explained here in a self-contained fashion without referring to PCM for any missing bits. -> FB: Replaced the affected passages in Chap. 4 with formulations from the dissertation; considerably better now, with much less PCM overhead.}
\todo{OK: The introduction and brief overview of DMM introduce the sub-meta-models starting with the resource landscape, application architecture, etc. We should adjust the structure of this technical report accordingly. -> FB: Swapped the order in the intro; now: AppArch, ResLandscape, Deployment, UsageProfileModel, AdaptationPoints Model, Adaptation Process Model}
\todo{OK: FB: Insert UsageProfile Model}
\todo{OK: FB: Make pre instance model formatting consistent in Chap. 2}
\todo{OK: FB: Discuss with Niko: Rename 4.2.3 Dynamic Virtualized Resource Landscapes to Resource Landscapes}
\todo{OK: FB: fyi: only skimmed 4.1.5}
\todo{OK: FB: Search the copy-pasted text for references to the tech report, e.g., KoBrHu2014-TechReport-DML}
\todo{OK: FB: Discuss with Niko: Maybe promote 5.2.1 to 5.2?}
\todo{OK: FB and NH: search for copy-paste slips such as "thesis"}
\todo{OK: FB: !Discuss with Niko: Conclusion or something similar? See Outline 1.5}
\todo{OK: FB/NH: Fix lost bibtex, section, and figure references (NH: OK)}
\todo{OK: FB: Modeling abstractions vs. meta-model vs. model (unify at least in the headings)}
\todo{OK: FB: Search for remaining todos}
\todo{OK: FB: Replace DMM with DML. Niko: Should be fine now}
\todo{OK: FB: Check bibliography for duplicates, ugly entries, etc.}
\todo{OK: FB: Page through the document, look for bad line breaks and overlong lines}
\todo{OK: FB: Clean up the SVN, e.g., delete src-gen, put a readme file into the figures/src folder pointing to our two dissertations}
\todo{OK: FB: Self-containedness?? w.r.t. StoEx, for example? -> StoEx refers to Heiko's work, that is ok.}
\todo{OK: FB: Check outlines and connecting text for validity}
\todo{OK: FB: Rename the project to DML}
% State of the Art & Related Work
% Scenarios
% Part I: The original DML: Application Level and Resource Landscape
% Part II: Dynamic Model and Reconfiguration
%\chapter{Technical Reference}
% Part III: Discussion
% Glossar
% Print the list of acronyms only
% Index
% \addcontentsline{toc}{section}{Index}
% \renewcommand{\indexname}{Index}
% \begin{theindex}
% \input{PCM.idx}
% \end{theindex}
% \printindex
%\section{Hello-World Example}
% A
\newacronym{api}{API}{Application Programming Interface}
\newacronym{awr}{AWR}{Automatic Workload Repository}
\newacronym{aqua}{AQuA}{Automatic Quality Assurance}
% B
% C
\newacronym{crm}{CRM}{Customer Relationship Management}
\newacronym{cbd}{CBD}{Component-based Development}
\newacronym{cbsd}{CBSD}{Component-based Software Development}
\newacronym{cbse}{CBSE}{Component-based Software Engineering}
\newacronym{ccl}{CCL}{Component Composition Language}
\newacronym{ci}{CI}{Continuous Integration}
\newacronym{csm}{CSM}{Core Scenario Model}
\newacronym{csv}{CSV}{Comma-separated Values}
\newacronym{cbml}{CBML}{Component-Based Modeling Language}
\newacronym{cpu}{CPU}{Central Processing Unit}
% D
\newacronym{dbs}{DBS}{Database Server}
\newacronym{dml}{DML}{Descartes Modeling Language}
\newacronym{dns}{DNS}{Domain Name System}
\newacronym{dsl}{DSL}{Domain-specific Language}
\newacronym{dql}{DQL}{Descartes Query Language}
\newacronym{dvfs}{DVFS}{Dynamic Voltage and Frequency Scaling}
% E
\newacronym{ebnf}{EBNF}{Extended Backus-Naur Form}
\newacronym{emf}{EMF}{Eclipse Modeling Framework}
\newacronym{ejb}{EJB}{Enterprise JavaBean}
\newacronym{erp}{ERP}{Enterprise Resource Planning}
% F
% G
% H
\newacronym{hdd}{HDD}{Hard Disk Drive}
% I
\newacronym{ide}{IDE}{Integrated Development Environment}
\newacronym{it}{IT}{Information Technology}
% J
\newacronym{jdbc}{JDBC}{Java Database Connectivity}
\newacronym{jee}{Java~EE}{Java Enterprise Edition}
\newacronym{jpa}{JPA}{Java Persistence API}
\newacronym{jms}{JMS}{Java Message Service}
\newacronym{jsp}{JSP}{Java Server Pages}
\newacronym{jvm}{JVM}{Java Virtual Machine}
% K
\newacronym{klaper}{KLAPER}{Kernel LAnguage for PErformance and Reliability analysis}
% L
\newacronym{lqn}{LQN}{Layered Queueing Network}
\newacronym{lsq}{LSQ}{Least Squares}
\newacronym{lad}{LAD}{Least Absolute Differences}
% M
\newacronym{mars}{MARS}{Multivariate Adaptive Regression Splines}
\newacronym{mamba}{MAMBA}{Measurement Architecture for Model-Based Analysis}
\newacronym{mda}{MDA}{Model-driven Architecture}
\newacronym{mdb}{MDB}{Message-Driven Bean}
\newacronym{mdsd}{MDSD}{Model-driven Software Development}
\newacronym{mof}{MOF}{Meta Object Facility}
\newacronym{mql}{MQL}{MAMBA Query Language}
\newacronym{mle}{MLE}{Maximum Likelihood Estimation}
\newacronym{mva}{MVA}{Mean Value Analysis}
% N
\newacronym{nop}{NOP}{No Operation}
% O
\newacronym{ocl}{OCL}{Object Constraint Language}
\newacronym{omg}{OMG}{Object Management Group}
\newacronym{wls}{WLS}{Oracle WebLogic Server}
% P
\newacronym{pcm}{PCM}{Palladio Component Model}
\newacronym{pdr}{PDR}{Performance Data Repository}
\newacronym{pe}{PE}{Performance Engineering}
\newacronym{pmif}{PMIF}{Performance Model Interchange Format}
\newacronym{pmf}{PMF}{Probability Mass Function}
\newacronym{pn}{PN}{Petri Net}
\newacronym{puma}{PUMA}{Performance by Unified Model Analysis}
\newacronym{pdf}{PDF}{Probability Density Function}
\newacronym{pect}{PECT}{Prediction Enabled Component Technology}
% Q
\newacronym{qn}{QN}{Queueing Network}
\newacronym{qee}{QEE}{Query Execution Engine}
\newacronym{qos}{QoS}{Quality of Service}
\newacronym{qpn}{QPN}{Queueing Petri Net}
\newacronym{qpme}{QPME}{Queueing Petri Net Modeling Environment}
% R
\newacronym{ram}{RAM}{Random Access Memory}
\newacronym{rdbms}{RDBMS}{Relational Database Management System}
\newacronym{rdseff}{RDSEFF}{Resource Demanding Service Effect Specification}
\newacronym{rmi}{RMI}{Remote Method Invocation}
% S
\newacronym{san}{SAN}{Storage Area Network}
\newacronym{scr}{SCR}{Service Component Registry}
\newacronym{se}{SE}{Software Engineering}
\newacronym{seff}{SEFF}{Service Effect Specification}
\newacronym{sequel}{SEQUEL}{Structured English Query Language}
\newacronym{sla}{SLA}{Service Level Agreement}
\newacronym{smm}{SMM}{Structured Metrics Metamodel}
\newacronym{sopeco}{SoPeCo}{Software Performance Cockpit}
\newacronym{soap}{SOAP}{Simple Object Access Protocol}
\newacronym{spa}{SPA}{Stochastic Process Algebra}
\newacronym{spe}{SPE}{Software Performance Engineering}
\newacronym{spec}{SPEC}{Standard Performance Evaluation Corporation}
\newacronym{spt}{UML-SPT}{UML Profile for Schedulability, Performance and Time}
\newacronym{sql}{SQL}{Structured Query Language}
\newacronym{stoex}{StoEx}{Stochastic Expression}
\newacronym{sut}{SuT}{System under Test}
\newacronym{spn}{SPN}{Stochastic Petri Net}
\newacronym{svm}{SVM}{Support Vector Machine}
% T
\newacronym{tco}{TCO}{Total Cost of Ownership}
% U
\newacronym{ui}{UI}{User Interface}
\newacronym{uml}{UML}{Unified Modeling Language}
\newacronym{uri}{URI}{Uniform Resource Identifier}
\newacronym{url}{URL}{Uniform Resource Locator}
\newacronym{uuid}{UUID}{Universally Unique Identifier}
% V
\newacronym{vcpu}{vCPU}{virtual Central Processing Unit}
\newacronym{vm}{VM}{Virtual Machine}
% W
\newacronym{wldf}{WLDF}{WebLogic Diagnostics Framework}
% X
\newacronym{xml}{XML}{Extensible Markup Language}
\newacronym{xsl}{XSL}{Extensible Stylesheet Language}
\newacronym{xslt}{XSLT}{\gls{xsl} Transformation}
\newacronym{xsd}{XSD}{\gls{xml} Schema}
% Y
% Z
This report presented \gls{dml}, a new architecture-level modeling language for modeling Quality-of-Service (QoS) and resource management related aspects of modern dynamic IT systems, infrastructures, and services. After providing a brief overview of related work on performance modeling and run-time system adaptation in Chapter~\ref{chap:sota}, we introduced an exemplary online prediction scenario for \gls{dml} in Chapter~\ref{chap:scenario}. The modeling abstractions were presented as meta-models in Chapter~\ref{chap:AppLevelAndResLandscape} and Chapter~\ref{chap:SysReconfig}, including illustrative modeling examples.
To conclude this report, we provide a discussion of the differences between \gls{dml} and \gls{pcm}~\cite{becker2008a}, with \gls{pcm} being one of the most advanced architecture-level performance modeling languages in terms of parameterization~\cite{koziolek2009a}.
Afterwards, we provide an outlook on future work.
\section{Differences between DML and PCM}
The differences between the two architecture-level performance modeling languages \gls{dml} and \gls{pcm} stem from their different scopes. While \gls{pcm} is focused on modeling design-time \gls{qos} properties of component-based software systems, \gls{dml} focuses on run-time aspects.
As already pointed out in Section~\ref{chap:introduction:sec:designvsruntime}, these two different goals lead to different requirements on the modeling abstractions. In the following, we list concrete modeling aspects where \gls{pcm} and \gls{dml} differ:
\begin{itemize}
\item \gls{pcm} supports and advocates the explicit specification of dependencies between model parameters, i.e., as explicit mathematical functions. While this is valid in design-time scenarios, \gls{dml} supports and advocates the probabilistic characterization of parameter dependencies based on monitoring data collected at run-time. In Section~\ref{chap:systemarchitecture:sec:application:sec:probdependencies:sec:rationale}, we explained why this is more practical in run-time scenarios and showed that explicit specifications often cannot be provided.
\item \gls{dml} supports modeling parameter characterizations that depend on the component assembly, i.e., flexible characterizations for different component instances of the same component type. In \gls{pcm}, parameter characterizations are fixed for the surrounding component type; differences between component instances are intended to be captured by explicit parameterizations. Thus, in run-time scenarios where representative monitoring data is available, only \gls{dml} offers a convenient approach to exploit such monitoring data for parameter characterization (see Section~\ref{chap:systemarchitecture:sec:application:sec:parameterization}).
\item \gls{pcm} supports modeling service behavior that depends on service input parameters passed upon service invocation. However, as explained in Section~\ref{chap:systemarchitecture:sec:application:sec:probdependencies:sec:rationale}, the behavior of software components often depends on parameters that are not available as direct service input parameters. \gls{dml} provides means to \emph{pass} such influencing parameters to the service models whose behavior is influenced (see Section~\ref{chap:systemarchitecture:sec:application:sec:probdependencies:sec:modelingapproach}).
\item In contrast to \gls{pcm}, \gls{dml} supports modeling multiple service behavior abstractions of different granularity for the same service. This allows for flexible performance predictions, ranging from quick bounds analysis to detailed model simulations (see Section~\ref{chap:systemarchitecture:sec:application:sec:servicebehavior}).
\item \gls{dml} supports the modeling of complex multi-layered resource landscapes. Furthermore, it provides a template modeling mechanism that eases the reuse of resource specifications among several resource containers. This is particularly useful for modeling virtualization layers, e.g., to specify \glspl{vm} that stem from the same \gls{vm} image (see Section~\ref{chap:systemarchitecture:sec:reslandscape}).
\item Furthermore, as described in Chapter~\ref{chap:SysReconfig}, \gls{dml} provides means to specify adaptation points as well as adaptation processes. This is out of scope for \gls{pcm}.
\end{itemize}
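The probabilistic characterization of a parameter dependency can be illustrated with a small sketch (plain Python, not DML or StoEx syntax; the monitored values are made up): instead of an explicit mathematical function, the parameter is described by an empirical probability mass function derived from monitoring data.

```python
# Hypothetical sketch: deriving an empirical probability mass function
# (PMF) for a parameter from monitored samples, instead of specifying an
# explicit mathematical dependency. The sample values are made up.
from collections import Counter

monitored_batch_sizes = [10, 10, 20, 10, 50, 20, 10, 20]

counts = Counter(monitored_batch_sizes)
total = len(monitored_batch_sizes)
pmf = {value: count / total for value, count in counts.items()}
# pmf == {10: 0.5, 20: 0.375, 50: 0.125}
```

A model solver can then sample from such a PMF wherever the parameter value is needed, which is exactly what makes monitoring data directly usable for parameter characterization.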
\section{Ongoing and Future Work}
Further details on \gls{dml}, e.g., on model parameterization and model solving, or on the integration of \gls{dml} into an autonomic performance-aware resource management process, can be found in the two PhD theses \cite{Brosig2014-Dissertation} and \cite{Huber2014-Dissertation}, respectively.
\gls{dml} provides a basis for several areas of future work. In the following overview, we provide several pointers for research extending our work.
\paragraph{Load-Dependent Resource Demands}
In classical performance engineering, resource demands are typically assumed to be load-independent. However, modern processors implement \gls{dvfs} mechanisms that adapt the processor speed depending on the current load. Thus, resource demands may appear to be load-dependent. To further increase the prediction accuracy, this load-dependency should be considered.
Current versions of established model solvers lack support for solving performance models with load-dependent resource demands~\cite{BaMaInSi2004-Model_Based_Perf_Prediction}. Hence, in order to support load-dependent resource demands, one should first extend the existing model solvers and then integrate the notion of a load-dependent resource demand into the modeling abstractions and resource demand estimation approaches.
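As a minimal illustration (plain Python; all parameter values are hypothetical and not taken from this report), the following sketch solves a single open queue whose service demand shrinks as utilization grows, mimicking a DVFS-governed processor that clocks up under load. Since the utilization depends on the demand and vice versa, a simple fixed-point iteration is used:

```python
# Hypothetical sketch of a load-dependent resource demand: under light
# load a DVFS-governed processor is clocked down, so the service demand
# (in seconds) is higher; under heavy load it shrinks. Numbers are made up.
def load_dependent_demand(utilization, d_min=0.010, d_max=0.016):
    # linear interpolation between the demand at idle and at full load
    return d_max - (d_max - d_min) * utilization

def solve_open_queue(arrival_rate):
    """Response time of an M/M/1-style queue whose demand depends on its
    own utilization, solved by fixed-point iteration."""
    utilization = 0.5  # initial guess
    for _ in range(1000):
        demand = load_dependent_demand(utilization)
        new_utilization = min(arrival_rate * demand, 0.999)
        if abs(new_utilization - utilization) < 1e-12:
            break
        utilization = new_utilization
    demand = load_dependent_demand(utilization)
    return demand / (1.0 - utilization)  # M/M/1 response time formula
```

A solver without such a fixed point would either over- or underestimate the demand, which is why extending the solvers must precede extending the modeling abstractions.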
\paragraph{Event-Based Systems}
The work in \cite{Rathfelder2013-Dissertation} describes how event-based interactions in component-based architectures can be modeled. It furthermore provides a generic approach for integrating the developed modeling abstractions into an architecture-level performance model. This approach can be applied to extend \gls{dml} with support for modeling event-based interactions such as point-to-point connections or decoupled publish/subscribe interactions, while platform-specific details about the event processing within the communication middleware remain encapsulated.
\paragraph{Integration of Specialized Resource Modeling Approaches}
As part of ongoing research projects, suitable modeling abstractions for network infrastructures~\cite{RyKoZs2013-ThroughputPrediction,RyZsKo2013-DNI-meta-model,RyKo2014-DCPerf-DNI2QPN} and storage systems~\cite{noorshams2013c,noorshams2014b} are under development. Given that these modeling approaches focus on network and storage models, respectively, they aim to support (i)~more accurate performance analysis than what is possible with coarse-grained resource models, and (ii)~further degrees of freedom when evaluating fine-granular configuration options of network infrastructures or storage systems. To obtain performance predictions, these specialized performance models require detailed workload profiles as input. Using \gls{dml}, such workload profiles can be derived from the modeled application layer and the corresponding usage profile. These specialized modeling approaches should be integrated into \gls{dml}, on the one hand, to increase the modeling capabilities of \gls{dml} and, on the other hand, to simplify the application of the specialized models.
\paragraph{\gls{qos} Properties Beyond Performance}
The presented \gls{dml} approach is focused on performance prediction; however, the general modeling approach is not limited to performance. In future work, \gls{dml} could be extended to support the analysis of further \gls{qos} properties. For instance, architecture-based reliability analysis \cite{brosch2011b} could be integrated into \gls{dml} in order to support evaluations of trade-offs between performance and reliability. For example, database transactions that fail due to optimistic locking can be retried multiple times, which may increase reliability at the cost of performance.
Other system properties such as power consumption and operating costs are gaining in importance. In particular, adding cost estimates to \gls{dml} would allow multi-criteria optimizations trading off performance against costs (cf.~\cite{koziolek2013a}).
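The retry example above can be made concrete with a small sketch (plain Python; the probabilities and durations are hypothetical) that quantifies both sides of the trade-off:

```python
# Hypothetical sketch: overall success probability and expected time of a
# transaction with per-attempt success probability p_success and duration
# t_tx (seconds) that is retried up to max_retries times after a failure
# (e.g., due to optimistic locking).
def retry_tradeoff(p_success, t_tx, max_retries):
    p_fail = 1.0 - p_success
    attempts = max_retries + 1
    overall = 1.0 - p_fail ** attempts
    # expected number of attempts (truncated geometric distribution)
    expected_attempts = sum(
        k * p_fail ** (k - 1) * p_success for k in range(1, attempts + 1)
    ) + attempts * p_fail ** attempts
    return overall, expected_attempts * t_tx
```

With a per-attempt success probability of 0.9 and two retries, reliability rises from 0.9 to 0.999, while the expected time grows from 1.0 to 1.11 times the transaction duration: exactly the kind of trade-off an integrated reliability analysis would expose.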
\paragraph{Explicit Consideration of Adaptation Costs}
During an adaptation process, different adaptation actions might exhibit different costs in terms of execution time or impact on the performance and efficiency of the running system.
For example, a \gls{vm} migration takes more time than adding virtual resources and has a significant impact on the network performance.
On the other hand, the performance gain of a \gls{vm} migration can be higher than that of adding virtual resources.
Thus, it is of interest to investigate methods to quantify the adaptation cost of different adaptation actions and to extend the modeling abstractions to express such costs explicitly.
The expressed costs can then be considered in the adaptation process to trade off adaptation costs against their achieved impact on system performance and efficiency.
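As a minimal illustration of such a trade-off (plain Python; the action names, durations, and gains are entirely made up), an adaptation process could rank candidate actions by their benefit-to-cost ratio:

```python
# Hypothetical example: ranking adaptation actions by benefit-to-cost
# ratio; all numbers are made up for illustration.
actions = [
    {"name": "migrate VM", "duration_s": 120.0, "perf_gain": 0.30},
    {"name": "add vCPU",   "duration_s": 5.0,   "perf_gain": 0.15},
]

def best_action(candidates):
    # prefer the action with the highest performance gain per second
    # of adaptation time
    return max(candidates, key=lambda a: a["perf_gain"] / a["duration_s"])
```

Here the cheaper action wins despite its smaller absolute gain; a real cost model would also have to account for side effects such as the network impact of a migration mentioned above.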
\paragraph{Self-Aware Computing Systems}
The long-term vision of the Descartes Research Project --- the research project behind \gls{dml} --- is to develop new methods for the engineering of self-aware computing systems. The latter are designed with built-in online \gls{qos} prediction and self-adaptation capabilities used to enforce \gls{qos} requirements in a cost- and energy-efficient manner.
For the definition of self-awareness in this context, see Section~\ref{Sec:Self-Awareness}.
\gls{dml} lays the foundation for this vision. In the future, self-aware computing systems should be designed from the ground up with built-in self-reflective, self-predictive, and self-adaptive capabilities. Furthermore, the overall approach should be applied in industrial cooperations to showcase the applicability of our approach and thereby establish the vision of the self-aware computing paradigm.
%!TEX root=../../DML.tex
The scenario presented previously in Chapter~\ref{chap:scenario} is only a small example of what today's (distributed) data centers look like. They are complex constructs of resources interacting in different directions. In the vertical direction, resources are abstracted (e.g., by virtualization or a JVM) to share them among the guests. At the same time, resources can be scaled horizontally (e.g., by adding further servers or VMs) or reassigned (e.g., by migrating virtual machines or services). For effective system reconfiguration, it is crucial to take this information into consideration. Therefore, models that serve as a basis for reconfiguration decisions must cover diverse aspects and concepts. However, current performance models usually do not provide means to reflect such information. We will now introduce the aspects that we believe are critical for run-time performance management and effective system reconfiguration and that are conceptually presented in \Cref{sec:dynamicview}.
\paragraph{Resource Landscape Architecture}
Probably the most general distinction of data center infrastructure on the horizontal level today is the categorization of resources into computing, storage, and network infrastructure. Each of these three types has its specific purpose and performance-relevant properties, and each must be taken into consideration when reconfiguring the architecture of the services running in the data center. Because of their differences, each of these resources should be modeled in its own specific way. This report presents an approach for modeling the performance-relevant properties of the computing infrastructure; storage and network infrastructure are part of other research projects (cf.~\cite{noorshams2013a,noorshams2013b,noorshams2013c,RyKoZs2013-ThroughputPrediction,RyZsKo2013-DNI-meta-model}).
% Figure omitted in this excerpt. Caption: Main types of data center resources.
Another important aspect when thinking about the autonomic reconfiguration of data centers is the physical size of the data center. This has an impact on the scalability of the reconfiguration method, e.g., on how many resource managers must be used. Furthermore, for the migration of services or VMs, it is important to know the landscape of the data center to decide whether a migration operation is possible and to estimate how costly it might be.
\paragraph{Layers of Resources}
A common reappearing pattern in modern distributed IT service infrastructures is the nested containment of system entities, e.g., data centers contain servers, servers typically contain a set of virtual machines (VMs) hosted on a virtualization platform, servers and VMs run an operating system, which may contain a middleware layer, and so on.
This leads to a tree of nested system entities that may change at run-time because of virtual machine migrations, hardware or software failures, and so on. Because of this flexibility, a large variety of different execution environments can be realized that all consist of similar, recurring elements.
Furthermore, the information about how resource containers are stacked (layering) is important for reconfiguration decisions (e.g., to decide whether an entity can be migrated or not).
Another important aspect is the influence of each layer on performance. Various experiments have shown that the layering of resources influences performance. Therefore, the different layers must be captured explicitly in the models to predict their impact on the system's performance.
% Figure omitted in this excerpt. Caption: Different resource layers and their influence on the performance.
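The nested-containment pattern described above can be sketched as a simple tree structure (plain Python; the class and attribute names are illustrative, not actual DML meta-model elements):

```python
# Hypothetical sketch of the nested containment of system entities;
# names are illustrative and do not correspond to DML meta-model elements.
class Container:
    def __init__(self, name, kind):
        self.name = name
        self.kind = kind          # e.g., "server", "virtual machine"
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def depth(self):
        # number of layers in the subtree rooted at this container
        return 1 + max((c.depth() for c in self.children), default=0)

# data center -> server -> VM -> JVM, as in the example in the text
dc = Container("DC1", "data center")
srv = dc.add(Container("srv01", "server"))
vm = srv.add(Container("vm01", "virtual machine"))
jvm = vm.add(Container("jvm01", "JVM"))
```

A migration then corresponds to moving a subtree to a different parent, which is why the layering information matters when deciding whether a migration is possible.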
%\todo{Add motivation from CLOSER, make clear why the resource landscape must also be modeled.}
\paragraph{Reuse of Entities}
In general, the infrastructure and the software entities used in data centers are not single and unique entities. For example, a rack usually consists of the same computing infrastructure installed several times, and virtual machines of the same type are deployed hundreds or thousands of times. However, at run-time, when the system is reconfigured, the configuration of a virtual machine might change. The virtual machine is then still of the same type as before, but has a different configuration.
With currently available modeling abstractions such as \gls{pcm}, it is necessary to model each container and its configuration explicitly. This can be very cumbersome, especially when modeling clusters of hundreds of identical machines. An intuitive idea would be a meta-model concept like a multiplicity to specify the number of instances in the model. However, this prohibits individual configurations for each instance. The desired concept would support a differentiation between container types and instances of these types: the type specifies the general performance properties relevant for all instances of the type, while the instance stores the performance properties of the individual container instance.
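The desired separation of container types and instances can be sketched as follows (plain Python; the class and attribute names are purely illustrative and do not reflect the actual DML meta-model):

```python
# Hypothetical sketch of the type/instance separation described above;
# names and values are illustrative, not actual DML meta-model elements.
class ContainerType:
    """Performance properties shared by all instances of this type."""
    def __init__(self, name, cpu_cores, cpu_ghz):
        self.name = name
        self.cpu_cores = cpu_cores
        self.cpu_ghz = cpu_ghz

class ContainerInstance:
    """A concrete container whose configuration may diverge from its
    type after a run-time reconfiguration."""
    def __init__(self, container_type):
        self.type = container_type
        self.overrides = {}  # instance-specific configuration

    def config(self, key):
        return self.overrides.get(key, getattr(self.type, key))

# A cluster of 100 identical machines needs only one type definition ...
blade = ContainerType("BladeX", cpu_cores=16, cpu_ghz=2.6)
cluster = [ContainerInstance(blade) for _ in range(100)]
# ... and a reconfigured instance stores only its deviation from the type.
cluster[0].overrides["cpu_cores"] = 8
```

This keeps the model compact (one type, many instances) while still allowing each instance to record its own configuration after a reconfiguration.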
%!TEX root=../../DML.tex
\chapter{Model-based System Adaptation}
In this chapter, we introduce the parts of \gls{dml} relevant for (i)~describing the dynamic aspects of modern IT systems, infrastructures, and services and (ii)~modeling autonomic resource management at run-time.
\Cref{chap:SysReconfig:sec:background} explains the background and the motivation for these concepts before \Cref{sec:dynamicview} presents the implementation.
\section{Motivation and Background}
%!TEX root=../../DML.tex
\section{Adaptation Points Model}
Today's distributed IT systems are increasingly dynamic and offer various degrees of freedom for adapting the system at run-time.
However, to realize model-based system adaptation, these properties must be reflected on the model-level.
In this section, we introduce the adaptation points meta-model as part of the \gls{dml}.
The aim of the adaptation points meta-model is to annotate \Amodel{}s to describe the degrees of freedom of the resource landscape and the application architecture, i.e., the points where the system can be adapted at run-time.
In other words, adaptation points at the model level correspond to adaptation operations executable on the system at run-time.
Other model elements that may change at run-time but cannot be influenced directly (e.g., the usage profile) are not the focus of this meta-model.