ZONES: scheduling, TS scheduling with projects and tasks, FSS scheduling without zones, and FSS scheduling within zones

1. Explanation of scheduling

Draw the scheduling circle; explain ts_quantum, ts_tqexp and ts_slpret from the TS dispatch table.

Show the currently configured scheduling classes and their place within the scheduling circle:

# dispadmin -l

Show the TS dispatch table:

# dispadmin -c TS -g | more

-g = get the dispatch parameters for the given class

Explain the various schedulers, the user priority (used to calculate PRI) and TSUPRILIM.
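To experiment with the dispatch-table values themselves, the table can be dumped to a file, edited, and loaded back. A minimal sketch (the file name and millisecond resolution are only examples):

# dispadmin -c TS -g -r 1000 > /var/tmp/ts.table
# vi /var/tmp/ts.table
# dispadmin -c TS -s /var/tmp/ts.table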

2. Show the schedulers and change the user priority (needed to calculate PRI) with TS

# priocntl -l

Show how to get the scheduling class for a PID with:

# ps -e -o user,pid,comm,class,pri | more

Show the current scheduling parameters for a PID with:

$ /bin/cpuhog.pl &

$ priocntl -d -i pid <pid>
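The cpuhog.pl script itself is not reproduced in this text; assuming it is nothing more than a CPU-burning busy loop, a Perl one-liner can serve as a stand-in:

$ perl -e 'while (1) { }' &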

PBIND ALL THE CPUHOG PROCESSES TO THE SAME CORE / CPU IN A MULTICORE / MULTIPROCESSOR ENVIRONMENT.

$ psrinfo

Show how the processes are scheduled amongst the various cores / CPUs.

$ mpstat 3

$ /usr/sbin/pbind -b 1 <pid>
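To verify a binding afterwards, pbind can also query it:

$ /usr/sbin/pbind -q <pid>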

Start a process with a lower USER priority:

$ priocntl -e -c TS -p -15 /bin/cpuhog.pl &

[2] 1563

-p = Time-Sharing User PRIority = TSUPRI

-m = Time-Sharing User PRIority LIMit = TSUPRILIM, which will always be >= -p

If you set -m < -p, the limit wins: -p is clamped down to the value of -m.

Use prstat to see the result.
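According to the rule above, starting a process with -m -15 and -p -5 should leave both TSUPRILIM and TSUPRI at -15, since the requested -5 lies above the limit (values chosen only for illustration):

$ priocntl -e -c TS -m -15 -p -5 /bin/cpuhog.pl &

$ priocntl -d -i pid <pid>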

LOWER the parameters of the fastest process as end user:

$ priocntl -s -c TS -m -12 -p -20 -i pid 2563

$ priocntl -d -i pid 1563

Use prstat to see the result.

Raise the slowest running process’s user priority limit as root:

# priocntl -s -c TS -m 15 -i pid 1563

# priocntl -d -i pid 2563

Raise TSUPRILIM to the max as root:

# priocntl -s -c TS -m 60 -i pid 1563

Kill the Perl scripts:

# pkill -9 cpuhog.pl

3. Introduce projects and tasks

Create user “guest”:

# mkdir -p /export/home

# useradd -d /export/home/guest -m -s /bin/bash guest

# passwd guest

# vi /etc/project

lowproject:11:testing low priority projects:guest::project.cpu-shares=(privileged,1,none)

medproject:12:testing medium priority projects:guest::project.cpu-shares=(privileged,5,none)

highproject:13:testing high priority projects:guest::project.cpu-shares=(privileged,10,none)
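Instead of editing /etc/project by hand, equivalent entries can be created with projadd; a sketch for the first project, using the same ID and attribute string as above:

# projadd -p 11 -c "testing low priority projects" -U guest -K "project.cpu-shares=(privileged,1,none)" lowproject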

If already logged in as user ‘guest’, exit and become user guest again with ‘su - guest’ so that the new project entries are picked up. Find out the projects available for user guest:

$ id -p → shows the default project

$ projects → shows all available projects

And start some processes in the right project as user “guest”:

$ newtask -p lowproject /bin/cpuhog.pl &

[1] 5061

$ newtask -p medproject /bin/cpuhog.pl &

$ newtask -p highproject /bin/cpuhog.pl &

NOW TAKE THE TIME TO PBIND THE PROCESSES TO THE SAME CORE IF NECESSARY.

$ /usr/sbin/pbind -b 3 <pid>

And check their parameters:

# priocntl -d -i pid 5061

TIME SHARING PROCESSES:

PID TSUPRILIM TSUPRI

5061 0 0

# ps -o user,pid,args,class,project,projid,taskid,pri -p 5061

USER PID COMMAND CLS PROJECT PROJID TASKID PRI

root 5061 /usr/bin/perl /bin/cpuhog.pl TS lowproject 11 163 40

While running prstat, as root change the scheduling for lowproject:

# priocntl -s -c TS -p 15 -m 22 -i projid 11

And check:

# priocntl -d -i pid 5061

TIME SHARING PROCESSES:

PID TSUPRILIM TSUPRI

5061 22 15

And the scheduling for project “lowproject”:

# priocntl -d -i projid 11

TIME SHARING PROCESSES:

PID TSUPRILIM TSUPRI

5061 22 15

This process “5061” should now be scheduled faster on the SAME CORE than the others, since it belongs to the project “lowproject”, whose user priority was just raised.

Notice, however, that this type of scheduling is hard to manage on a larger system with multiple applications, since it is difficult to predict the outcome of scheduling with TS.

The solution: FSS

Kill all existing Perl scripts.

$ pkill -9 cpuhog.pl

4. Introduce FSS

Change the default scheduler to FSS as root:

# dispadmin -d FSS → show /etc/dispadmin.conf; the new default takes effect at the next boot. OR move the scheduling class of already running processes to FSS:

# priocntl -s -c FSS -i all (or -i pid 1, where “1” is the PID of init, for example)

The above command moves all scheduled processes to the FSS class if they were in TS / IA before.
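To confirm which class is now the default, run dispadmin without a class argument; it prints the current default scheduling class:

# dispadmin -d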

NOTE: default share=1

To check:

# ps -e -o user,pid,args,class,project,projid,taskid,pri

Start the user’s processes:

# su - guest

Check the default project:

$ id -p

Check that all processes are running under FSS:

$ ps -e -o user,pid,comm,class,project,projid,taskid,pri

Before starting new processes, as root first raise the default number of shares (=1) on the GLOBAL ZONE:

# prctl -n zone.cpu-shares -v 20 -r -i zone global

Then start a few new processes as user guest:

$ newtask -p lowproject /bin/cpuhog.pl &

$ newtask -p medproject /bin/cpuhog.pl &

$ newtask -p highproject /bin/cpuhog.pl &

Take the time to bind all processes to the same core / CPU.

$ /usr/sbin/pbind -b 3 <pid>

Check the scheduling parameters:

# prctl -i pid <pid> | more

[…]

project.cpu-shares

privileged 5 - none -

[…]

Or:

# prctl -i task <taskid>

Calculate the CPU load based upon the Share distribution:

CPU% = ((shares / total shares) * 100%) / <number of cores or CPUs>
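A worked example, assuming the shares from /etc/project (1, 5 and 10), that the three cpuhog processes are the only CPU-bound work, and a 2-core machine with all of them bound to the same core:

total shares = 1 + 5 + 10 = 16
lowproject: ((1/16) * 100%) / 2 ≈ 3%
medproject: ((5/16) * 100%) / 2 ≈ 16%
highproject: ((10/16) * 100%) / 2 ≈ 31%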

Compare the outcome with prstat.

As root, change the number of CPU shares for individual processes or projects:

# prctl -n project.cpu-shares -v 10 -r -i project lowproject OR -i pid <pid>

Check the new settings:

# prctl -i project lowproject

[…]

project.cpu-shares

privileged 10 - none -

[…]

See the influence and calculate the outcome.
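Under the same assumptions as before, with lowproject now holding 10 shares, the distribution on the shared core becomes:

total shares = 10 + 5 + 10 = 25
lowproject: ((10/25) * 100%) / 2 = 20%
medproject: ((5/25) * 100%) / 2 = 10%
highproject: ((10/25) * 100%) / 2 = 20%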

Afterwards kill all Perl scripts.

$ pkill -9 cpuhog.pl

5. Perform these actions on Zones

Check the current setting of a zone:

# prctl -n zone.cpu-shares -i zone global → should show “1”

If using WHOLE-ROOT zones, as root copy the cpuhog.pl and memhog.pl scripts into each zone:

# cp -p /bin/memhog.pl /bin/cpuhog.pl /export/zones/zone1/root/usr/bin/

Start the cpuhog.pl script in a couple of zones.
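One way to start the script in each zone directly from the global zone, assuming it has been copied in as shown above, is via zlogin:

GLOBAL # zlogin zone1 /bin/cpuhog.pl &

GLOBAL # zlogin zone2 /bin/cpuhog.pl &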

To see the number of shares configured use:

GLOBAL # prctl -n zone.cpu-shares -P -i zone zone1
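To find the PIDs of the cpuhog processes in all zones from the global zone, the zone name can be printed as a ps column:

GLOBAL # ps -e -o zone,pid,args | grep cpuhog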

Bind all cpuhog.pl processes to the same core from the global zone:

GLOBAL # pbind -b 3 12471

After a while the load per zone should be about the same on the core / CPU being used, since every zone still holds the default of one share.

Now supply different numbers of shares to the zones; the default, also for the GLOBAL zone, is “1”.

Set more shares on the non-global zones:

GLOBAL # prctl -n zone.cpu-shares -v 10 -r -i zone zone1

GLOBAL # prctl -n zone.cpu-shares -v 5 -r -i zone zone2

GLOBAL # prctl -n zone.cpu-shares -v 1 -r -i zone zone3
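Assuming all cpuhog processes are bound to the same core and the global zone itself stays idle (only zones that are actually competing for CPU count), the bound core should be divided roughly as:

total shares = 10 + 5 + 1 = 16
zone1: 10/16 ≈ 62% of that core
zone2: 5/16 ≈ 31% of that core
zone3: 1/16 ≈ 6% of that core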

Check the distribution of the CPU resources per zone with prstat -Z, and verify the settings per zone:

# prctl -n zone.cpu-shares -i zone zone1

Now change the settings:

# prctl -n zone.cpu-shares -r -v 10 -i zone zone3, etc.
