CPUBROKER(7)
SYNOPSIS
The CPU Broker is a reservation-based resource manager for
CPU time. The broker mediates between multiple real-time
applications and the facilities of an underlying real-time
operating system: it implements a configurable CPU manage-
ment policy, monitors resource use, and adjusts alloca-
tions over time. This architecture helps to ensure that
the quality of service (QoS) needs of "important" applica-
tions are satisfied, especially when the relative impor-
tance of applications changes dynamically. An additional
and important benefit is that the broker can automatically
determine many of the scheduling parameters of its client
applications.
This document provides an overview of the use and imple-
mentation of the CPU Broker. The first section below is a
"quick start" example of the Broker in action. The second
section summarizes the implementation of the Broker, and
introduces important terminology. The third section is a
brief tutorial on writing applications that interface with
the CPU Broker.
QUICK START
[Note: This demo requires most of the software dependen-
cies described in the README: TimeSys Linux, ACE/TAO, QuO,
gkrellm v2, etc.]
If you would like to quickly see the Broker in action, you
can run the following commands in separate terminals to
start the Broker, a synthetic CORBA server, and its
client:
[nemo@anemone /tmp] broker_allup_default
Done!
[nemo@anemone /tmp] rktimes -n rt1 -p 8080 -- rt_server -n
rt1
CPU frequency: 851.947Mhz
RT server ready, waiting for connection from rt_client...
[nemo@anemone /tmp] rt_client
You can then monitor the CPU usage and reservation for
rt_server using gkrellm version two (available from
http://www.gkrellm.net).
[nemo@anemone /tmp] gkrellm -s localhost -P 8080
The gkrellm window should display two "CPU" graphs: "CPU
0" is the CPU usage of the rt_server program, and "CPU 1"
is the reservation made by the broker for that applica-
tion. (Note that the labels are below the graphs, not
above.) Furthermore, the "CPU 1" graph shows the unused
portion of the reservation in green and overlays the used
portion in cyan. Both graphs should show square waves
after a while, with the reservation being slower to drop
off. The important thing to note is that the reservation
changes over time to meet the needs of the application.
This is different from an ordinary reservation-based sys-
tem without brokering, in which the application's reserva-
tion would normally remain constant from the beginning to
the end of execution.
Going through each step in detail: the first program,
broker_allup_default, starts a process that houses most of
the broker objects and connects them in a default
configuration. Next we start a program, rktimes, that
monitors rt_server and sends updates to gkrellm through
TCP port 8080. The rt_server is a synthetic CORBA server
that consumes 250 or 500 milliseconds of CPU time every
second and reports this usage to the Broker. Finally, we
start rt_client, which connects to rt_server and
periodically does an RPC that causes the server to
actually consume CPU time.
A primary feature of the Broker is its ability to manage
multiple applications and arbitrate between them when
there is CPU contention. Continuing with the previous
(and running) example, the following commands will demon-
strate the Broker's ability to deal with contention.
First, we create some (artificial) contention by decreas-
ing the total amount of CPU the Broker will reserve for
applications from 75% to 60%:
[nemo@anemone /tmp] cbhey file://strict_policy.ior set
max-cpu to 0.60
The cbhey command is an extremely flexible tool that can
communicate with different parts of the Broker; we will be
using it extensively in later examples. Next, we add a
new application by starting another rt_server, rt_client,
and gkrellm.
[nemo@anemone /tmp] rktimes -n rt2 -p 8081 -- rt_server -n
rt2 -o rtserver2.ior 200ms
[nemo@anemone /tmp] rt_client -s file://rtserver2.ior
[nemo@anemone /tmp] gkrellm -s localhost -P 8081
This new application consumes 200 milliseconds of CPU time
every period (1 second) and its reservation should also be
around 200 milliseconds per second. The contention cre-
ated by this new application is reflected in the gkrellm
graph for "rt1" as it is receiving a lower reservation
than before. This occurs because both applications have
the same level of importance and the Broker cannot decide
which one to favor, so it arbitrarily chooses "rt2". You
can change this by setting the priority of one of the
applications, which the Broker knows by the names "rt1"
and "rt2". For example, to bump the priority of "rt1" to
one:
[nemo@anemone /tmp] cbhey file://strict_policy.ior set
priority of task rt1 to 1
The graph for "rt1" should return to normal and the graph
for "rt2" should show the Broker automatically dropping
the reservation when the reservation for "rt1" increases.
You might try raising the priority of "rt2" above "rt1" to
see what happens.
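For example, reusing the syntax shown above (and assuming
that larger values mean higher priority, as the previous
command suggests), you could raise "rt2" with:
[nemo@anemone /tmp] cbhey file://strict_policy.ior set
priority of task rt2 to 2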
IMPLEMENTATION
The CPU Broker is implemented as a CORBA-based server that
receives resource requests from soft real-time
applications and "translates" those requests into real
reservations. An application that uses the Broker
generates a request at the end of (some or all of) its
task periods, when it knows how much CPU it will need in
future periods. An
application can be modified so that it communicates with
the Broker; alternatively, an unmodified application can
be managed through the use of a wrapper program, like
proc_advocate(1), albeit at some loss in precision. An
application's request, along with any external data, is
fed into the Broker's adaptation strategies. Adaptation
strategies for individual applications pass their advice
to a global strategy that resolves contention for the CPU.
Finally, when all of the adaptations have been computed,
the applications' CPU reservations are adjusted by the
Broker.
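The following sketch is purely illustrative (the class and
method names are hypothetical, not the Broker's actual
API), but it shows the shape of this report, advise, and
arbitrate cycle:
# Illustrative only: hypothetical names, not the Broker's API.
def broker_cycle(tasks, policy, max_cpu):
    advice = {}
    for task in tasks:
        # Each task's advocate turns the reported usage into
        # advice for the global policy.
        advice[task.name] = task.advocate.advise(task.reported_usage)
    # The global policy resolves contention for the CPU.
    grants = policy.arbitrate(advice, max_cpu)
    # Finally, the reservations are adjusted.
    for task in tasks:
        task.set_reservation(grants[task.name])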
Communication between applications (or their wrappers) and
the CPU Broker is done through remote method invocations
on CORBA objects. At the moment there are three main
classes of Broker objects:
Manager
A Manager is a central object used to track other
objects and route messages between them. Typically
there is only a single manager per machine.
Task/Advocate
A Task object is the glue between the applications,
the Broker, and the real-time operating system's
reservation scheme. Advocates wrap Task objects
and perform application-specific adaptations.
Policy A Policy object makes global decisions about how to
resolve contention for the CPU. These objects take
advice from the Advocates and make the final deci-
sions about how much CPU time every managed process
actually receives.
The standard implementations of these classes and their
support code are contained in several shared libraries:
libcpu-broker
Contains both a straightforward implementation of a
Manager and the Task object, called RKTask, for the
TimeSys Resource Kernel.
libtask_factories/libdelegates
Contains a variety of reusable Advocates.
libpolicies
Contains the StrictPolicy object, which implements
a "strict priority" contention policy that favors
high-priority processes when making reservations.
For example, if all of the CPU has been allocated
and a high-priority process requests more time, the
StrictPolicy will reduce the reservations of lower
priority processes. In contrast, a low-priority
process will have its request lowered (and continu-
ally readjusted) to use whatever CPU time is cur-
rently available.
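As a rough illustration of the StrictPolicy behavior just
described (a sketch only, not the object's actual
implementation), contention could be resolved along these
lines:
# Sketch only: grant CPU in priority order; lower-priority
# tasks are squeezed down to whatever CPU time remains.
def strict_priority(requests, max_cpu):
    # requests: list of (name, priority, wanted_fraction)
    grants = {}
    remaining = max_cpu
    ordered = sorted(requests, key=lambda r: r[1], reverse=True)
    for name, priority, wanted in ordered:
        grants[name] = min(wanted, remaining)
        remaining = remaining - grants[name]
    return grants
Re-running such a computation whenever a request arrives is
what lets a high-priority request shrink the reservations
of lower-priority tasks.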
The libraries contain the bulk of the CPU Broker's code
and are used by the tools listed below:
broker_allup
A "super server" that houses the shared libraries
and the objects that they create.
cbhey A "super client" that can be used to communicate
with the various Broker objects.
proc_advocate
A wrapper that allows unmodified applications to be
managed by the Broker.
rktimes
A generic utility for TimeSys Linux that monitors a
resource set and performs some of the jobs of
rkexec(1).
Learning to wield these tools, especially cbhey(1), is key
to using and configuring the Broker to behave as desired
in a variety of scenarios. Consult the man pages for more
information about these tools, and read the Doxygen gener-
ated documentation in the install directory for more
details on the Broker's classes.
In addition to using the existing tools, effective use of
the CPU Broker may require modifying applications to com-
municate with the Broker, or writing new Broker objects
that implement application-specific adaptation policies.
The remainder of this man page is a brief tutorial about
writing applications that interface with the CPU Broker.
TUTORIAL
This tutorial gives you a detailed example of creating
Broker objects, creating your own Broker classes, and mod-
ifying an application. To fully complete this tutorial,
you will need the following software in addition to the
Broker: TimeSys Linux, ACE/TAO, QuO, OmniORB, python v2,
gkrellm v2, etc.
First, start the broker_allup_default process again, if it
is not already running. This executable is just a shell
script that uses the Broker's tools to construct a default
configuration consisting of a single Manager object that
uses a StrictPolicy. Feel free to peruse the script to
see how it works.
The application we will be using is the rt_server, like in
the QUICK START section. However, this time we will be
providing our own Task object. Creating the object is
simply a matter of asking the shared library referenced by
"cb.ior" and contained in the broker_allup process:
[nemo@anemone /tmp] cbhey file://cb.ior create rktask with
name rt1 > rt1-rkt.ior
This command will construct an RKTask object with the name
"rt1" and print its IOR on standard out. The name is used
to easily identify our application and will be the name of
the RK resource set that holds the application's pro-
cesses. Next, we create an Advocate object that does no
adaptation, but gives us a place to insert other advo-
cates:
[nemo@anemone /tmp] cbhey file://delegates.ior create
real-time-task-delegate > rt1.ior
This new object acts as a delegate that passes all method
calls on to another object. Initially, it has nothing to
delegate to, so we need to set the "remote-object"
attribute to point at the RKTask we just created.
[nemo@anemone /tmp] cbhey -c IDL:BrokerDelegates/Delegate:1.0
file://rt1.ior set remote-object to file://rt1-rkt.ior
This command line introduces some more cbhey syntax, so
let's take a moment to examine it in more detail.
The purpose of cbhey is to provide high-level access to
the Broker's objects through a common command line syntax.
This approach makes it easier to avoid hard wiring object
configurations in applications since a user can construct
arbitrary graphs of objects just prior to execution time.
These user defined configurations can then be wrapped in a
shell script for ease of use, like broker_allup_default.
The command line syntax is derived from AppleScript's
"tell" syntax, wherein a script communicates with a par-
ticular "property" inside of a server. In our case, the
server is a CORBA object referenced by its IOR and the
properties are the object's methods and attributes. For
example, in the line above we are setting the "remote-
object" attribute of "./rt1.ior" to an object reference.
The reverse is also possible: we can "get" the "remote-
object" attribute and have the IOR we just set printed to
standard out. The only departure from the English-like
syntax is the "-c" option which is used to perform a
"cast" to another IDL type. This cast is required because
there are cases where cbhey is unable to figure out the
object's type and needs a hint from the user. See the man
page for more information and examples.
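For instance, if the "get" form mirrors the "set" form used
above (see cbhey(1) for the exact syntax), reading back the
attribute we just set might look something like:
[nemo@anemone /tmp] cbhey -c IDL:BrokerDelegates/Delegate:1.0
file://rt1.ior get remote-object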
With our Advocate and Task object configuration now
constructed, we can start our server, client, and gkrellm.
Notice that we pass a different option, "-t", to the
rt_server. This option tells the server to use an
externally created Advocate instead of creating its own.
[nemo@anemone /tmp] rktimes -n rt1 -p 8080 -- rt_server -t
file://rt1.ior
[nemo@anemone /tmp] rt_client
[nemo@anemone /tmp] gkrellm -s localhost -P 8080
At this point, you may notice that the reservation being
made by the Broker is not the same as in the QUICK START
section. The difference is due to the fact that neither
the RKTask nor the advocate is performing any adaptations.
Instead, they are just passing along the value requested
by the application. To get the same effect as before we
need to create a MaxDecayTaskAdvocate:
[nemo@anemone /tmp] cbhey file://task_factories.ior create
max-decay-task-advocate > rt1-mdta.ior
This advocate's strategy is to advise the Broker to set
the reservation to the highest value seen in the previous
five periods. This policy works well in general since it
can quickly adapt to spikes in usage without over-provi-
sioning for long periods of time. Just like before, we
need to connect the object before using it:
[nemo@anemone /tmp] cbhey -c IDL:BrokerDelegates/Delegate:1.0
file://rt1-mdta.ior set remote-object to file://rt1-rkt.ior
We can then insert our new advocate and observe the change
in the reservation (there is no need to stop/start the
application):
[nemo@anemone /tmp] cbhey -c IDL:BrokerDelegates/Delegate:1.0
file://rt1.ior set remote-object to file://rt1-mdta.ior
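To make the "max decay" idea concrete, here is a rough
sketch of how such advice could be computed (illustrative
only; this is not the advocate's actual source):
# Sketch only: remember the last five reported usages and
# advise the largest of them, so the reservation decays
# slowly after a spike in usage.
class MaxDecaySketch:
    def __init__(self, window=5):
        self.window = window
        self.history = []
    def advise(self, reported_usage):
        self.history.append(reported_usage)
        if len(self.history) > self.window:
            self.history.pop(0)
        return max(self.history)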
Creating your own delegate is a relatively simple process:
you simply override a single method, adjust one of its
parameters, and call the same method on the wrapped
object. You can make a simple advocate that doubles the
application's original request by saving the following
python code in the file "MyDelegate.py":
import sys
from omniORB import CORBA, PortableServer
from RealTimeTaskDelegateImpl import *

class MyDelegate(RealTimeTaskDelegateImpl):
    def ReportCPU(self, rtt, status, advice):
        # Call the wrapped object, doubling the advice.
        self.myRemoteObject.ReportCPU(rtt,
                                      status,
                                      advice * 2)
        return
    pass

orb = CORBA.ORB_init(sys.argv, CORBA.ORB_ID)
poa = orb.resolve_initial_references("RootPOA")
di = MyDelegate()
do = di._this()
# Write the object's IOR to the named file.
file = open(sys.argv[1], "w")
file.write(orb.object_to_string(do))
file.close()
poaManager = poa._get_the_POAManager()
poaManager.activate()
orb.run()
This program creates a subclass of a Broker-provided
class, instantiates an object of this subclass, and writes
its IOR to the file named on the command line. You can run
it like so:
[nemo@anemone /tmp] /usr/local/bin/python MyDelegate.py
rt1-md.ior
Again, we connect everything and watch the reservation
behavior change:
[nemo@anemone /tmp] cbhey -c IDL:BrokerDelegates/Delegate:1.0
file://rt1-md.ior set remote-object to file://rt1-rkt.ior
[nemo@anemone /tmp] cbhey -c IDL:BrokerDelegates/Delegate:1.0
file://rt1.ior set remote-object to file://rt1-md.ior
At this point we will create our own application that is
managed by the Broker, so stop the rt_server and rt_client
we started above. This new application will also be syn-
thetic, like rt_server, but it should give you a good idea
of what is required of an application that needs to commu-
nicate with the Broker. Read and copy the following
python code into the file "brapp.py":
import os
import sys
from resource import *
from signal import *
from omniORB import CORBA, PortableServer
import Broker

orb = CORBA.ORB_init(sys.argv, CORBA.ORB_ID)
obj = orb.string_to_object(sys.argv[1])
# Get a reference to the Manager.
man = obj._narrow(Broker.Manager)
obj = orb.string_to_object(sys.argv[2])
# Get a reference to the Advocate we should use.
adv = obj._narrow(Broker.RealTimeTask)

def noop(sig, fr):
    return

signal(SIGALRM, noop)
# Get our current resource usage for comparison
# later on.
last_usage = getrusage(RUSAGE_SELF)
# Setup our scheduling parameters.
sp = [
    # Period of one second.
    Broker.NamedValue("period",
                      CORBA.Any(CORBA.TC_string,
                                "1s")),
    # The process ID to manage.
    Broker.NamedValue("pid",
                      CORBA.Any(CORBA.TC_long,
                                os.getpid()))]
# Add ourselves to the manager.
man.AddTask(adv, sp)
try:
    alarm(1)
    while 1:
        pause()  # Wait for a signal
        alarm(1) # Setup the next periodic signal
        # Do something...
        print "Hello, World!"
        for x in range(0, 1000):
            for y in range(0, 50):
                pass
            pass
        # Get our current usage,
        usage = getrusage(RUSAGE_SELF)
        # ... subtract our previous usage, and
        total = ((usage[0] + usage[1]) -
                 (last_usage[0] + last_usage[1]))
        # ... report it to the broker, in microseconds.
        adv.ReportCPU(adv,
                      int(total * 1000000),
                      int(total * 1000000))
        last_usage = usage
        pass
    pass
finally:
    # Always be sure to remove ourselves
    # from the manager.
    man.RemoveTask(adv)
    pass
Start your new application by running:
[nemo@anemone /tmp] /usr/local/bin/python brapp.py `cat
manager.ior` `cat rt1.ior`
If everything worked you should see "Hello, World!" being
printed out every second.
Every application that uses the Broker follows the same
basic steps: (1) construct the scheduling parameters, (2)
add the Advocate to the Manager using AddTask(), (3) peri-
odically call ReportCPU() on the advocate, and (4) remove
the Advocate from the Manager with RemoveTask() when fin-
ished. The first step, constructing the scheduling
parameters, depends on the timing requirements of your
application; in the example above, the period and process
ID are packed into a list of NamedValue objects and passed
to the Manager through AddTask().
Once the application is added to the Broker, it can begin
its periodic processing and call ReportCPU(). In the
above example, we figure out the CPU usage of our process
using getrusage(2) and then send that value, in microsec-
onds, to the advocate. Notice that this only gets the
resource usage for the current thread and not the others
in the process. To do this properly, we would have to
call a Broker-provided utility function,
rk_resource_set_get_usage(3), that computes the usage for
all of the processes/threads in a resource set. Unfortu-
nately, there is no python binding for this C function, so
we have to compromise and use getrusage.
Finally, when the periodic processing of an application
terminates we have to make sure to remove our advocate
from the manager so other tasks can use the newly freed
resources. In case something goes catastrophically wrong
and the application cannot perform the cleanup, the Broker
should be able to repair itself, at least partially. Any
other inconsistencies left by such an occurrence can be
repaired manually using cbhey and rkdestroyRS.
This concludes the tutorial; to learn more about the com-
mands and functions used, see the man pages listed below.
SEE ALSO
broker_allup(1), broker_allup_default(1), cbhey(1),
proc_advocate(1), rkexec(1), rktimes(1), rt_server(1),
rt_client(1), rk_resource_set_get_by_name(3),
rk_resource_set_get_usage(3), gkrellm(1)
AUTHOR
The Alchemy project at the University of Utah.
NOTES
The Alchemy project can be found on the web at
http://www.cs.utah.edu/flux/alchemy
CPU Broker 1.0.0 2003/12/01 16:41:25 CPUBROKER(7)