Tuesday, December 13, 2011

Ten Top Traits of Problem-Solvers



     Something we all seem to have in common is problems. Some see problems and give up immediately. Others thrash about or throw money at their problems, with the predictable result that the problems continue unresolved, often getting worse. People carp, duck and hide, pull their hair, cry, or lash out, but their problem is seldom solved in a way that's best for all parties involved.
     People who solve problems seem to have several traits in common:

     #10 - Problem-solvers get a good fix on reality. They do not spend a lot of time in dreamland, wondering about what coulda been or woulda been if things were different. Things are not different -- problem-solvers know this and act accordingly.
     #9 - Problem-solvers do not gripe and do not make trouble for others.
     #8 - Problem-solvers are self-starters. They do not wait for someone else to point out there is something wrong. And they don't wait for someone else to tell them how to fix it.

     #7 - Problem-solvers do not keep lists of grievances. They may keep a few objective examples of a problem to use as evidence when problem-solving discussions arise.

     #6 - Problem-solvers engage their imaginations to come up with new solutions they can try out, and they have the guts to go forward as they test their solutions.

     #5 - Problem-solvers do not look to others for assurances that cannot be delivered. They know who does and does not make decisions, and try to work with those who do.

     #4 - Problem-solvers are nimble-minded, tough-love optimists who work tenaciously to solve the problems facing them.

     #3 - Problem-solvers are capable of allying themselves with others so that if a problem goes beyond their personal abilities, they can make use of the talents of others.

     #2 - Problem-solvers effectively juggle their entire load of problems so all get resolved. They don't let any one problem so dominate their attention (except in emergencies) that they can't multitask. Here they attend to one problem. A few minutes later they're busy solving another. They don't make everything else wait until something is solved completely. They work on multiple fronts as best they can.

     And the number one attribute:

     #1 - Problem-solvers go the extra mile to solve problems and to help others solve their problems. They value win-win solutions whenever they are possible.

     Are you a problem-solver?

Thursday, September 15, 2011

Comindware Task Management Solution

Comindware Task Management™ is powerful yet easy-to-use free software for managing the daily tasks that arise in your organization. It comes pre-integrated as part of Comindware Tracker™, receiving and managing tasks generated during process execution alongside your other tasks. With its user-friendly interface and no complex solution to maintain, task tracking has never been easier.

Separate 

Separate different parts of your organization with the exclusive Workspaces technology, and start creating and managing Tasks for each of them in minutes.

Collaborate 

Collaborate and organize your team's work on Tasks through ongoing comments and discussions. Attach and share files with the group you're working with.


Follow tasks 

Follow Tasks and discussions you’re involved in.

Track execution 

Manage your schedule and track execution with Calendar, deadlines, notifications, and exclusive Task Schedule view.

Integration with Outlook 

Stay productive by working entirely through the familiar MS Outlook interface.

 

Sunday, August 7, 2011

Move Linux server to XenServer Host


In order to P2V a Linux server to a XenServer host, you need to reboot the machine you want to convert and boot from the XenServer Installation CD. When you see the Welcome to XenServer screen, select OK and the installer will do some hardware detection, etc. After that, you will get four choices, one of them being
Convert an existing OS on this machine to a VM (P2V)

And that’s it! Follow the rest of the prompts and the server will be virtualized. After it is complete, you will need to attach a VIF in order to have external network connectivity.
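As a sketch of that last step, a VIF can be created and plugged from the host console with the xe CLI. This is an assumption about the typical workflow, not part of the original walkthrough, and the angle-bracket UUIDs are placeholders you would copy from the list commands:

```shell
xe vm-list                        # find the new VM's UUID
xe network-list                   # find the UUID of the target network
xe vif-create vm-uuid=<vm-uuid> network-uuid=<network-uuid> device=0
xe vif-plug uuid=<vif-uuid>       # attach it while the VM is running
```

(xe vif-create prints the new VIF's UUID, which vif-plug then takes.)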

General Guidelines for Virtualizing Physical Servers

When considering how best to begin virtualizing a collection of physical servers, start with servers that are simply configured, to gain some comfort level and experience, and move later to servers with more complex configurations.

Good candidates typically include servers that are used for test and development environments, and servers used for in-house IT infrastructure (intranet web servers, DNS, NIS, and other network services, etc.). Typically servers that are doing heavily CPU-intensive tasks (sophisticated mathematical modeling, video rendering) or are I/O-intensive (high-traffic commercial web sites, highly-used database servers, streaming audio/video servers) are not the best candidates for virtualization at the start.

Once you have identified some physical servers that seem reasonable to work on first, you should take a close look at how you are currently using them. What applications are they hosting? How I/O intensive are they? How CPU-intensive are they?

To make a reasonable assessment, you should gather a reasonable amount of data on the current physical servers that you are thinking about virtualizing. Look at system monitoring data for disk usage, CPU usage, memory usage, and network traffic, and consider both peak and average values.
Good candidates for virtualization are:
    •    servers with low CPU and memory usage and low NIC and disk throughput - these are more likely to coexist as VMs on a XenServer Host with a few other VMs without unduly constraining their performance.
    •    servers that are a few years old - so their performance as VMs hosted on a newer server would be comparable to their existing state.
    •    servers that do not depend on hardware which cannot be virtualized, such as dongles, serial or parallel ports, or other unsupported PCI cards (serial cards, cryptographic accelerators, etc.).
Once you have identified a set of machines that you want to virtualize, you should plan the process to accomplish the task. First, provision the physical servers that will serve as your XenServer Hosts. The chief constraint on the number of VMs you can run per XenServer Host is system memory.
Next, plan how you will create the VMs. Your choices are to P2V an existing server, install a fresh server from network-mounted vendor media, or install a base operating system using a pre-existing template.
If you P2V an existing server, it's best to P2V a test instance of the server, and run it in parallel with the existing physical server until you are satisfied that everything works properly in the virtual environment before re-purposing the existing physical machine.
Next, plan how to arrange the desired VMs on the XenServer Hosts. Don't "mix up" servers - assign VMs to specific XenServer Hosts, giving consideration to complementary resource consumption (mixing CPU-intensive and I/O-intensive workloads) and complementary peak usage patterns (for instance, assigning overnight batch processing and daytime interactive workloads to the same XenServer Host).
For configuring individual VMs themselves, keep these guidelines in mind:

    •    create single-processor VMs unless you are serving a multi-threaded application that will perform demonstrably better with a second virtual CPU.

    •    when you configure the memory settings for a VM, consult the documentation for the guest operating system you plan to run in that VM and for the applications you plan to run on them.
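On XenServer, both settings can be applied with the xe CLI. A hedged sketch: the UUID is a placeholder, the 512 MiB figure is an arbitrary example, and the exact memory parameters can vary between XenServer versions:

```shell
xe vm-param-set uuid=<vm-uuid> VCPUs-max=1 VCPUs-at-startup=1
xe vm-memory-limits-set uuid=<vm-uuid> static-min=512MiB dynamic-min=512MiB \
    dynamic-max=512MiB static-max=512MiB
```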

Tuesday, July 26, 2011

Backup/Restore MySQL Database


A simple approach to backing up and restoring MySQL databases.

Backup

To save an existing database it is recommended that you create a dump.
  • To dump all databases you must run the command:



mysqldump --user=****** --password=****** -A > /path/to/file_dump.SQL 
  • To dump several specific databases you must run the command:



mysqldump --user=****** --password=****** --databases db_1 db_2 db_n > /path/to/file_dump.SQL
  • To dump all tables from a database you must run the command:



mysqldump --user=****** --password=****** db > /path/to/file_dump.SQL
  • To dump specific tables from a database you must run the command:



mysqldump --user=****** --password=****** db --tables tab1 tab2 > /path/to/file_dump.SQL



For each of the commands above you must specify a user (user) and password (password) with administrator rights on the database.

Restore your database

To restore a dump just launch the command:

mysql --user=****** --password=****** db_name < /path/to/file_dump.SQL


Note that

A database dump is a record of the table structure and the data from a database, usually in the form of a list of SQL statements.
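If you run these dumps regularly, it helps to date-stamp the output file. A minimal sketch: the helper name is mine, the directory and database name are placeholders, and the mysqldump invocation in the comment follows the pattern above:

```shell
# dump_name DIR DB - print a dated dump path such as DIR/DB-2011-07-26.sql
dump_name() {
    printf '%s/%s-%s.sql' "$1" "$2" "$(date +%F)"
}

# Intended use, with credentials as in the examples above:
#   mysqldump --user=****** --password=****** db > "$(dump_name /backups db)"
```

Piping the dump through gzip (mysqldump ... | gzip > file.sql.gz) and restoring with gunzip -c file.sql.gz | mysql ... also works and saves disk space.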

Sunday, July 24, 2011

Change your Hostname without Rebooting in RedHat Linux



Original Article

This tutorial covers changing your hostname in RedHat Linux without having to do a reboot for the changes to take effect. I've tested this on RedHat, Fedora Core, and CentOS. It should work for all the versions in between, since they all closely follow the same RedHat configuration.


Make sure you are logged in as root and move to /etc/sysconfig and open the network file in vi.
cd /etc/sysconfig 
vi network



Look for the HOSTNAME line and replace it with the new hostname you want to use. In this example I want to replace localhost with redhat9.

HOSTNAME=redhat9


When you are done, save your changes and exit vi. Next we will edit the /etc/hosts file and set the new hostname.

vi /etc/hosts



In hosts, edit the line that has the old hostname and replace it with your new one.
192.168.1.110  redhat9

Save your changes and exit vi. The changes to /etc/hosts and /etc/sysconfig/network are necessary to make your changes persistent (in the event of an unscheduled reboot).
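The two edits above can also be scripted rather than made in vi. A small sed sketch; the helper name is mine, and it assumes the file already contains a HOSTNAME= line, as the stock RedHat file does:

```shell
# set_hostname_line FILE NAME - rewrite the HOSTNAME= line in FILE in place
set_hostname_line() {
    sed -i "s/^HOSTNAME=.*/HOSTNAME=$2/" "$1"
}

# Intended use for this tutorial:
#   set_hostname_line /etc/sysconfig/network redhat9
```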
Now we use the hostname program to change the hostname that is currently set.
hostname redhat9
And run it again without any parameters to see if the hostname changed.
hostname

Finally we will restart the network to apply the changes we made to /etc/hosts and /etc/sysconfig/network.
service network restart

To verify the hostname has been fully changed, log out of your system; you should see your new hostname being used at the login prompt and after you've logged back in.



Quick, painless, and you won't lose your server's uptime.

Tuesday, July 19, 2011

MySQL Change root Password


How do I change MySQL root password under Linux, FreeBSD, OpenBSD and UNIX like operating system over ssh / telnet session?

Setting a MySQL password is one of the essential tasks. By default, the root user is the MySQL admin account. Please note that the root account of your Linux / UNIX operating system and the MySQL root account are different; they are separate and have nothing to do with each other. For security purposes, you may sometimes remove the MySQL root account and set up admin as the MySQL superuser.

mysqladmin command to change root password

If you have never set a root password for the MySQL server, the server does not require a password at all for connecting as root. To set up the root password for the first time, use the mysqladmin command at the shell prompt as follows:
$ mysqladmin -u root password NEWPASSWORD
However, if you want to change (or update) a root password, then you need to use the following command:
$ mysqladmin -u root -p'oldpassword' password newpass
For example, if the old password is abc and you want to set the new password to 123456, enter:
$ mysqladmin -u root -p'abc' password '123456'

Change MySQL password for other users

To change a normal user's password (let us assume you would like to change the password for user asad), type the following command:
$ mysqladmin -u asad -p'oldpassword' password newpass

Changing MySQL root user password using MySQL sql command

This is another method. MySQL stores usernames and passwords in the user table inside the mysql database. You can update or change the password for user asad directly using the following method:
1) Login to mysql server, type the following command at shell prompt:
$ mysql -u root -p
2) Use mysql database (type command at mysql> prompt):
mysql> use mysql;
3) Change password for user asad, enter:
mysql> update user set password=PASSWORD("NEWPASSWORD") where User='asad';
4) Finally, reload the privileges:
mysql> flush privileges;
mysql> quit
The last method can also be used from PHP, Python, or Perl scripts via the MySQL API.

Wednesday, July 13, 2011

SVNSync

Using svnsync

Original Article

svnsync is a one way replication system for Subversion. It allows you to create a read-only replica of a repository over any RA layer (including http, https, svn, svn+ssh).
First, let's set up the initial sync. We have two repositories; I will skip the details of svnadmin create. For remote access to the replica repository, I used svnserve, and I added a user with full write access. The destination repository should be completely empty before starting.


So, to make this easier, I am going to put the repository URIs into environment variables:
$ export FROMREPO=svn://svn.example.com/
$ export TOREPO=svn://dest.example.com/
Because svnsync must be allowed to rewrite anything on TOREPO, we need to make sure the commit hooks are configured to allow our ‘svnsync’ user to do anything it wants.
On the server hosting TOREPO, I ran this:
$ echo "#!/bin/sh" > hooks/pre-revprop-change
$ chmod 755 hooks/pre-revprop-change
Now we are ready to setup the sync:
$ svnsync init ${TOREPO} ${FROMREPO}
This will prompt you for the username and password, and also sets several revision properties on the ${TOREPO} for revision zero. It doesn’t actually copy any of the data yet. To list the properties that it created, run:
$ svn proplist --revprop -r 0 ${TOREPO}

  svn:sync-from-uuid
  svn:sync-last-merged-rev
  svn:date
  svn:sync-from-url

$ svn propget svn:sync-from-url --revprop -r 0 ${TOREPO}

  svn://svn.example.com/
So all the knowledge about what we are syncing from is stored at the destination repository. No state about this sync is stored in the source repository.
We are now ready to begin copying data:
$ svnsync --non-interactive sync ${TOREPO}
And if everything is setup correctly, you will start replicating data.
Except, I suck. And the first thing I did was hit control+c. I figured this is a cool replication system, so I just ran the sync command from above again, and got this:
$ svnsync --non-interactive sync ${TOREPO}

Failed to get lock on destination repos, currently held by 'svn.example.com:0e4e0d98-631d-0410-9a00-9320a90920b3'
Failed to get lock on destination repos, currently held by 'svn.example.com:0e4e0d98-631d-0410-9a00-9320a90920b3'
Failed to get lock on destination repos, currently held by 'svn.example.com:0e4e0d98-631d-0410-9a00-9320a90920b3'
Failed to get lock on destination repos, currently held by 'svn.example.com:0e4e0d98-631d-0410-9a00-9320a90920b3'
Failed to get lock on destination repos, currently held by 'svn.example.com:0e4e0d98-631d-0410-9a00-9320a90920b3'
Failed to get lock on destination repos, currently held by 'svn.example.com:0e4e0d98-631d-0410-9a00-9320a90920b3'
Failed to get lock on destination repos, currently held by 'svn.example.com:0e4e0d98-631d-0410-9a00-9320a90920b3'
Failed to get lock on destination repos, currently held by 'svn.example.com:0e4e0d98-631d-0410-9a00-9320a90920b3'
Failed to get lock on destination repos, currently held by 'svn.example.com:0e4e0d98-631d-0410-9a00-9320a90920b3'
Failed to get lock on destination repos, currently held by 'svn.example.com:0e4e0d98-631d-0410-9a00-9320a90920b3'
svnsync: Couldn't get lock on destination repos after 10 attempts
Oh snap. I guess it's not so easy to restart after an aborted sync.
I started debugging, and found that svnsync keeps its lock state in another special property on revision zero.
To fix this, we can safely delete the lock:

$ svn propdel svn:sync-lock --revprop -r 0  ${TOREPO}

Now running sync again works! Hurrah!
After the sync finishes, we will want to keep the replica up to date.
I personally set up a ‘live’ sync, but it is also possible to use a crontab or another scheduling method to invoke sync whenever you want.
To setup a live sync, on the FROMREPO server, I appended this to my hooks/post-commit file:
svnsync --non-interactive sync svn://dest.example.com/ &
You will want to make sure that the user running Subversion (and the hook script) has a cached copy of the authentication info for the destination repository.
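Put together, the whole hook can stay this small. Subversion invokes post-commit with the repository path and revision number as arguments, and the URL is the replica from the example above:

```shell
#!/bin/sh
# post-commit hook: Subversion passes the repository path and new revision.
REPOS="$1"
REV="$2"
# Push the new revision to the read-only replica in the background.
svnsync --non-interactive sync svn://dest.example.com/ &
```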
Unfortunately, the post-commit hook won’t catch everything, so we also need to add this to the post-revprop-change hook:
svnsync --non-interactive copy-revprops  svn://dest.example.com/ ${REV} &
This will help propagate things like editing svn:log messages.
And there you go, that's the path I took to mirror one of my repositories onto another machine.

Wednesday, June 15, 2011

Cacti Installation on CentOS 5.2

Cacti is a GPL-licensed, scalable, RRDtool-based monitoring program with flexible graphing options. This article describes the process of installing and configuring Cacti on CentOS 5.2.
 
Useful links to this installation were BXtra and TechDB.

Per the Cacti documentation, Cacti requires:
    •    RRDTool 1.0.49 or 1.2.x or greater
    •    MySQL 4.1.x or 5.x or greater
    •    PHP 4.3.6 or greater; 5.x or greater highly recommended for advanced features
    •    A web server, e.g. Apache or IIS
I'd also recommend installing vim, net-snmp, net-snmp-utils, php-snmp, initscripts, perl-rrdtool, and any dependencies.

To perform this install, I am logged into Gnome as a normal user, and opened a terminal that is switched to the root user using the su command. I had already installed apache, mysql, and PHP during the original install process of CentOS 5.2.

I added a new repository to facilitate this install. To do this, I created a file (/etc/yum.repos.d/dag.repo) containing Dag Wieers' repository, which contains rrdtool, among other things.
[dag]
name=Dag RPM Repository for Red Hat Enterprise Linux
baseurl=http://apt.sw.be/redhat/el5/en/i386/dag
gpgcheck=1
gpgkey=http://dag.wieers.com/rpm/packages/RPM-GPG-KEY.dag.txt
enabled=1


You can create this file by typing vim /etc/yum.repos.d/dag.repo and copying and pasting the above information into the file. Be warned that the above text containing the repository is version and architecture-specific.

I then typed yum update to update CentOS and the repository list before installing additional software.

I installed everything but cacti through yum. You can verify that you have the packages in question (or the version numbers of installed packages) by attempting to install them, as yum will remind you that you already have the latest version installed, as well as the version status of the packages, as shown here:
# yum install php httpd mysql mysql-server php-mysql vim-enhanced net-snmp net-snmp-utils php-snmp initscripts perl-rrdtool rrdtool initscripts
Loaded plugins: fastestmirror, priorities
Loading mirror speeds from cached hostfile
* base: pubmirrors.reflected.net
* updates: mirror.fdcservers.net
* addons: chi-10g-1-mirror.fastsoft.net
* extras: mirror.fdcservers.net
Setting up Install Process
Parsing package install arguments
Package php-5.1.6-23.2.el5_3.i386 already installed and latest version
Package httpd-2.2.3-22.el5.centos.1.i386 already installed and latest version
Package mysql-5.0.45-7.el5.i386 already installed and latest version
Package mysql-server-5.0.45-7.el5.i386 already installed and latest version
Package php-mysql-5.1.6-23.2.el5_3.i386 already installed and latest version
Package 2:vim-enhanced-7.0.109-4.el5_2.4z.i386 already installed and latest version
Package 1:net-snmp-5.3.2.2-5.el5_3.1.i386 already installed and latest version
Package 1:net-snmp-utils-5.3.2.2-5.el5_3.1.i386 already installed and latest version
Package php-snmp-5.1.6-23.2.el5_3.i386 already installed and latest version
Package initscripts-8.45.25-1.el5.centos.i386 already installed and latest version
Package perl-rrdtool-1.3.7-1.el5.rf.i386 already installed and latest version
Package rrdtool-1.3.7-1.el5.rf.i386 already installed and latest version
Package initscripts-8.45.25-1.el5.centos.i386 already installed and latest version
Nothing to do

Download the latest version of Cacti (0.8.7e, as of the writing of this article) from here. I downloaded it to my desktop and unzipped it by right clicking it and selecting "Extract here". I also renamed the cacti-0.8.7e directory by right clicking and selecting "Rename". You could do this in the command line, if you wanted to:
[your root shell] # tar xzvf cacti-0.8.7e.tar.gz
[your root shell] # mv cacti-0.8.7e cacti
Move the entire cacti directory to /var/www/html/ :
[your root shell] # mv cacti /var/www/html
I chose to create a 'cactiuser' user (and cacti group) to run cacti commands and to have ownership of the relevant cacti files. It was here that I noticed that my install did not have any of the /sbin directories in its $PATH , so I simply typed the absolute path:
[your root shell] # /usr/sbin/groupadd cacti
[your root shell] # /usr/sbin/useradd -g cacti cactiuser
[your root shell] # passwd cactiuser
Change the ownership of the /var/www/html/cacti/rra/ and /var/www/html/cacti/log/ directories to the cactiuser we just created:
[your root shell] # cd /var/www/html/cacti
[your root shell] # chown -R cactiuser rra/ log/

Create a mysql root password, if you haven't already (the password in this example is samplepass):
[your root shell] # /usr/bin/mysqladmin -u root password samplepass

Create a MySQL database for cacti:
[your root shell] # mysqladmin --user=root --password=samplepass create cacti

Change directories to the cacti directory, and use the cacti.sql file to create tables for your database:


[your root shell] # cd /var/www/html/cacti
[your root shell- cacti] # mysql --user=root --password=samplepass cacti < cacti.sql



I also created a MySQL username and password for Cacti:
[your root shell] # mysql --user=root --password=samplepass
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 28
Server version: 5.0.45 Source distribution

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> GRANT ALL ON cacti.* TO cactiuser@localhost IDENTIFIED BY 'samplepass';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> exit
Bye

Edit /var/www/html/cacti/include/config.php with your favorite editor, and update the information to reflect our cacti configuration (you can leave the other text in the file alone):
/* make sure these values reflect your actual database/host/user/password */
$database_type = "mysql";
$database_default = "cacti";
$database_hostname = "localhost";
$database_username = "cactiuser";
$database_password = "samplepass";
$database_port = "3306";
Create a cron job that polls for information for Cacti (I'm choosing to use /etc/crontab here):

[your root shell] # vim /etc/crontab


Add this line to your crontab:
*/5 * * * * cactiuser /usr/bin/php /var/www/html/cacti/poller.php > /dev/null 2>&1
Edit your PHP config file at /etc/php.ini to allow more memory usage for Cacti. It is a relatively large text file; using vim, I searched for "memory_limit" by typing /memory_limit in command mode.
[your root shell] # vim /etc/php.ini
I changed memory_limit = 8M to memory_limit = 128M
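That edit can also be made non-interactively. A sketch with a throwaway helper of my own, assuming the stock "memory_limit = 8M" line format:

```shell
# bump_memory_limit FILE - set PHP's memory_limit to 128M in FILE in place
bump_memory_limit() {
    sed -i 's/^memory_limit = .*/memory_limit = 128M/' "$1"
}

# Intended use: bump_memory_limit /etc/php.ini
```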
Before I check to see if Cacti works, I want to check and see if mysqld and httpd are running using the service command.
[your root shell] # /sbin/service mysqld status
[your root shell] # /sbin/service httpd status

If mysqld and httpd are running, great. If not, type:
[your root shell] # /sbin/service mysqld start
[your root shell] # /sbin/service httpd start
If you're an "I need to see what the output looks like" type, here is an example of the previous command:
[your root shell] # /sbin/service mysqld status
mysqld is stopped
[your root shell] # /sbin/service mysqld start
Initializing MySQL database: Installing MySQL system tables...
OK
Filling help tables...
OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h localhost.localdomain password 'new-password'
See the manual for more instructions.
You can start the MySQL daemon with:
cd /usr ; /usr/bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl
cd mysql-test ; perl mysql-test-run.pl

Please report any problems with the /usr/bin/mysqlbug script!

The latest information about MySQL is available on the web at
http://www.mysql.com
Support MySQL by buying support/licenses at http://shop.mysql.com
[ OK ]
Starting MySQL: [ OK ]
You should now be able to access cacti at http://localhost/cacti from the local computer or from any computer within your LAN network at http://your.internal.IP.address/cacti .

There should be a Cacti Installation Guide window that shows up, giving licensing info and the like. Click "Next".

Select "New Installation", since this is a new installation.

The next window to pop up should tell you whether Cacti could find the paths to all of the elements that Cacti needs to run, such as RRDtool, PHP, snmp stuff, etc. If everything but Cacti was installed via yum, you should be good here. Click "Finish" to save the settings and bring up the login window.

Below is a screenshot of the login window. The default user name is admin. The default password is admin. It should prompt an automatic password change for the admin account when you log in the first time.

If you successfully log in, I'd recommend taking a break here. Depending on how fast you are, your cron job may not have had enough time to run the poller program and create data for your graphs. I'd suggest taking a deep breath, or brewing a cup of tea (or coffee) for yourself.

The localhost machine should have some graph templates that are already created, but you can click the "Create Additional Devices" link to add graphs for any other machines on your network. I added my FreeNAS box (tutorial for that to follow).

After having consumed your beverage of choice, press the "Graphs" button. Cacti should have a graph showing you a couple minutes of data for the machines you have added. The longer your machine is on, the more informational the graphs will be. Also, if you click on a particular graph, Cacti will show a more detailed view of it. Congratulations! You're now monitoring!

View the Cacti documentation page for more information on how to take advantage of Cacti.

Below are some graphs that were made using Cacti.

Wednesday, June 8, 2011

How to use Gevey : iphone4

How to use Gevey:
Before using it, note the following two tips:
1. If your iPhone 4 has custom firmware, the GEVEY SIM adapter will not work. You must restore your iPhone 4 to the original iPhone 4 firmware.
2. If, after booting up the iPhone 4, you do not see the "accept" screen, make sure you have an active SIM and that it is correctly inserted in your iPhone.
User Manual:
1. Turn off your iPhone and insert your SIM card and the GEVEY SIM together with the metal SIM tray provided.
2. Turn on your iPhone, wait for the SIM welcome menu to show, and then select "accept".
3. At the beginning, a "No Service" message will show on your iPhone. Don't worry; just wait, and you will see one signal bar appear in the top left corner.
4. After you see the signal bar, dial "112" and hang up within 2 seconds.
5. Turn on Airplane mode and then turn it off right away. Your phone will show "SIM Failure" and "No SIM Card Installed".
6. When you see the "No SIM Card Installed" message, turn on Airplane mode and turn it off again as soon as possible. You will then see the "SIM Failure" message, and the signal will show up as well.
Notice:
1. If no signal shows up after you follow the steps above, switch off your iPhone and repeat the steps.
2. If you want to use the 3G network, just turn on "data roaming" in the network menu.
3. You can select a language for the welcome message in the STK menu.

Tuesday, May 10, 2011

Install KnowledgeTree on CentOS

Note that these steps should be performed as root.

1) Install OpenOffice.org 2.4
a) Download the OpenOffice.org RPM from the OpenOffice.org website
http://download.openoffice.org/2.4.3/other.html
b) Untar the OpenOffice.org package
$ tar -xf OpenOffice.org.tar.gz

c) Run the OpenOffice.org RPM file
$ cd /RPMS
$ rpm -i *.rpm 

Or yum install openoffice.org-core.x86_64 

2) Temporarily disable SELinux.
$ su -
$ setenforce permissive

3) Configure the KnowledgeTree repositories
a) Create a new file with the following content and copy it to
/etc/yum.repos.d/KT.repo


[KT]
name=KT
baseurl=http://repos.knowledgetree.com/rpm/rhel/5/$basearch
enabled=1
gpgcheck=0

[KTnoarch]
name=KT - noarch
baseurl=http://repos.knowledgetree.com/rpm/rhel/5/noarch
enabled=1
gpgcheck=0

[Zend]
name=Zend Server
baseurl=http://repos.zend.com/zend-server/rpm/$basearch
enabled=1
gpgcheck=0

[Zendnoarch]
name=Zend Server - noarch
baseurl=http://repos.zend.com/zend-server/rpm/noarch
enabled=1
gpgcheck=0

b) Update Yum
$ yum update
4) Install KnowledgeTree using Yum 
$ yum install knowledgetree-ce
 
 
Then restart Zend Server:
/etc/init.d/zend-server restart
 
If some package fails, try to reinstall it separately with yum. If MySQL is not installed, install it separately or use the default MySQL. Start it with "service mysqld start", and stop the httpd service with "service httpd stop".

Monday, May 9, 2011

LVM volume Creation in Xenserver

First, add a partition (disk) to the server from XenCenter.
 
pvcreate /dev/xvdb1
vgcreate vg02 /dev/xvdb1
lvcreate --name lv02 --size 20G vg02
mke2fs -j /dev/vg02/lv02
mkdir /space or mkdir /home/systest
mount /dev/vg02/lv02 /space

df -h


vi /etc/fstab

/dev/vg02/lv02  /space ext3 defaults 1 2


vg02 and lv02 are just names; you can use any names you like. xvdb is the partition name, so define it according to your system.

Tuesday, May 3, 2011

Open source ALM tools continue to gain market share, give the development manager a migraine

The influence of open source on software development is often measured by the impact of successful libraries and frameworks. It’s hard to imagine building a modern web application without open source components. A similar trend is now unfolding in the Application Lifecycle Management (ALM) space, driven by tools created by projects needing to support their own open source software delivery. While ALM tools are often associated with the heavyweight workflow characteristics of enterprise application development, successful open source projects are a great example of the transformation underway in ALM, towards lean methods and lightweight, developer-centric tools.


In contrast with the application development tools which we use for writing and debugging code, ALM tools assist us with an application’s evolution over time. At their core, ALM tools track tasks and changes, help manage builds and releases, and support the dialogue that defines an application’s evolution. This subset of ALM is sometimes referred to as Application Development Management (ADM). On top of this core feature set sit tools for project and product management. Large organizations add additional tools to the stack to handle program and project portfolio management.
Thanks to a combination of resource constraints, a preference for using open source technologies for open development, and the common desire for developers to sharpen and extend their own toolset, the past decade has delivered a critical mass of the open-source ALM tools. Consider the scale of ALM on Eclipse.org: 33M lines of code in the last coordinated release (733 installable features/components), 330K total Bugzilla reports (3K new each month), 15K Bugzilla users, 1K committers (400 active), 300 projects and 173 Hudson build jobs. Add to that dozens of interdependencies between Eclipse projects and other open source projects such as the popular Apache libraries. ALM on Eclipse is managed entirely with open source tools including Bugzilla, CVS, Git, Hudson, MediaWiki and Mylyn.

The 1,948 respondents to the 2010 Eclipse Community Survey provide an overview of the degree to which open source tools have percolated into commercial software development. Only a small fraction of the survey respondents were directly involved with Eclipse, and half were from organizations with over 100 employees. The striking pattern is that the core open source ALM tools, when combined, have the market lead in each of the three key ALM categories visible in the figure below. In 2010, for these categories, open source ALM took the market lead from closed-source solutions. While surveys of this sort are always skewed towards the type of developer who bothers to answer surveys, this result remains indicative of a shift in application development practices and open source ALM market share. In 2011, I predict that this trend will continue and that open source tools will percolate into the ALM stacks of more conservative organizations. Removed by a degree or two of separation from their open source counterparts, many of those developers will not recognize the term DVCS before it is force-fed to them.

[Figure: 2010 Eclipse Community Survey results, showing tool adoption in three key ALM categories]

The attraction to open source ALM is not just the price point, but the amount of innovation that has been driven by open source developers building tools to support their own productivity patterns. The ecosystem of extensions that forms around popular open source projects is another key driver of adoption. Those ecosystems are also likely to produce the most interesting integrations, since open source has proven itself as the most effective mechanism for growing both the community and APIs needed for innovative extensions. Finally, organizations with long application and product lifecycles are attracted to open source ALM tools because a suitable open source license gives them confidence that they will be able to access and extend the knowledge base represented by their issue tracker ten years from now, when the landscape of ALM vendors will look much different than it does today.
Open source ALM tools are built on developer-centric principles. Transparency is favoured over hierarchy, with every task and bug on Eclipse and Mozilla being editable by anyone. Collaboration is asynchronous, with consistent use of the task or issue tracker to capture all discussion relevant to changes in the software. Modularity and framework boundaries are encouraged to line up with team boundaries, allowing API layers to facilitate dependencies and collaboration. There are also things missing from the stack. Responsiveness to the community often takes precedence over planning, and after the fading away of XPlanner, there has been a distinct gap in project management features within the open source tool space. There is also no single integrated open source ALM stack; instead, open source projects glue together their own best-of-breed solutions and layer customizations on top, as is the case with the numerous tools built on the Eclipse Bugzilla repository. Integration with product, project and portfolio management tools is typically non-existent, as this is not something that even large open source projects need.








While open source developers will continue working very happily with their increasingly slick tool set, this impedance mismatch with large-scale ALM implies major problems for organizations planning to get a lot of ALM functionality for free. There is a mismatch between both the toolset and the cultural aspects of open source ALM tools and what’s needed by the enterprise. Agile and lean development have the opportunity to bridge some of the cultural gap, but still have a considerable way to go in order to incorporate the lessons learned from open source. There is enough of a gap in the toolset that organizations already deploying open source tools at the enterprise ALM scale have needed to set up their own ALM tool engineering teams. These teams create enterprise-level authentication and access control, provide third-party ALM tool integrations, and implement support for features such as linking to existing Project Portfolio Management (PPM) tools. Due to the pace of change in open source ALM tools, they are fighting a losing battle. While wasteful, this exercise is currently necessary. Large organizations that fail to integrate already deployed open source tools into their existing ALM and PPM infrastructure will see a dramatic reduction in the predictability of their development process, since their process relies on a connectivity between development and planning tools that was present in the more traditional ALM tool stack.
There is hope. First, the benefits of open source ALM tools are fundamental: the ease with which they allow developers to work makes them happier and more productive. The velocity of successful open source projects demonstrates how good these tools are at supporting the delivery of high-value software that is responsive to the needs of the community and customer. On the flip side, enterprise ALM tools provide management and planning facilities which are critical for predictability of delivery, as well as the longer-term planning that is necessary for organizations to thrive. These two worlds must be integrated into a cohesive whole, especially as more Agile teams find themselves at the intersection of open source and enterprise ALM.
After I presented my keynote on open source ALM at W-JAX last November, a colleague from one of the world’s largest software companies commented that the same developers were twice as productive when they worked on open source projects as when they worked on closed source projects. We discussed the usual issues of motivation and incentive structure, but nailed down the key issue: the sheer friction generated by traditional ALM tools, which has been removed from open source ALM tools. It is time to reconnect the software planning process to a new breed of open source ALM tools that support lean and high-velocity delivery, connect them to the planning and management tools needed for software delivery at scale, and bring some of the benefits of open source development to the enterprise.

http://endeavour-mgmt.sourceforge.net/
jabox.org

Friday, April 29, 2011

Install Subversion in Cent OS

  1. Install packages using yum

    yum install subversion mod_dav_svn
    
    
  2. Configure /etc/httpd/conf.d/subversion.conf as you wish, for example (the /svn location used here is just one common layout):

    LoadModule dav_svn_module     modules/mod_dav_svn.so
    LoadModule authz_svn_module   modules/mod_authz_svn.so

    <Location /svn>
        DAV svn
        SVNParentPath /var/www/svn
        AuthType Basic
        AuthName "Subversion repos"
        AuthUserFile /var/www/svn/htpasswd
        Require valid-user
    </Location>
     
  3. Prepare the repository, and create a user for the AuthUserFile referenced in the config (the username is just an example):

    mkdir /var/www/svn
    cd /var/www/svn
    svnadmin create myrepo
    chown -R apache:apache myrepo
    htpasswd -cm /var/www/svn/htpasswd myuser
     
  4. Now we are ready to restart httpd.
    service httpd restart
    
Here we go ....

Wednesday, March 16, 2011

Jira JEMH configuration

Original post here

 

JEMH Installation

  1. Add the JEMH JAR
    Get the JAR specific to your version of Jira from the top of the page and copy it to the Jira WEB-INF/lib folder
  2. Add the ldaputils jar (if you use ldap)
    Get the latest ldaputils jar (currently 1.0.13) from the LDAP Util library page, take the example-ldaputil.properties file from the JAR, rename it ldaputil.properties, and edit it as appropriate
  3. Setup Logging (optional)
    Modify (or copy and modify) WEB-INF/classes/log4j.properties; this will enable logging of all EMH messages to a separate file. The default level is DEBUG; other values are 'INFO', 'WARN' and 'ERROR'.
    #
    # Create a separate appender
    #
    log4j.appender.EMHFileLog=org.apache.log4j.RollingFileAppender
    log4j.appender.EMHFileLog.File=atlassian-jira-emh.log
    log4j.appender.EMHFileLog.MaxFileSize=20480KB
    log4j.appender.EMHFileLog.MaxBackupIndex=5
    log4j.appender.EMHFileLog.layout=org.apache.log4j.PatternLayout
    log4j.appender.EMHFileLog.layout.ConversionPattern=%d %t %p [%c{4}] %m%n
    log4j.appender.EMHFileLog.Threshold=DEBUG
    
    #
    # add entries for the three EMH packages
    #
    log4j.logger.com.dolby.atlassian.jira.service.util.handler=DEBUG, EMHFileLog
    log4j.additivity.com.dolby.atlassian.jira.service.util.handler=false
    
    log4j.logger.com.dolby.atlassian.jira.service.util.handler.emh=DEBUG, EMHFileLog
    log4j.additivity.com.dolby.atlassian.jira.service.util.handler.emh=false
    
    log4j.logger.com.dolby.atlassian.jira.service.util.handler.emh.processor=DEBUG, EMHFileLog
    log4j.additivity.com.dolby.atlassian.jira.service.util.handler.emh.processor=false
    
    log4j.logger.com.dolby.atlassian.jira.service.util.handler.emh.service=DEBUG, EMHFileLog
    log4j.additivity.com.dolby.atlassian.jira.service.util.handler.emh.service=false
Enabling detailed logging of lower level mail operations in Jira is covered in Logging email protocol details.
What no log file?
Whether working on Windows or Linux, a lack of recently updated atlassian-jira.log and atlassian-jira-emh.log usually means file permissions are at fault....
  1. Make Jira aware of the new handler
    POP / IMAP?
    This example shows how to set up with POP; IMAP is equally possible, just not documented. To use IMAP, reconfigure Dovecot and modify .../imap/imapservice.xml instead of .../pop/popservice.xml.
    Update WEB-INF/classes/services/com/atlassian/jira/service/services/pop/popservice.xml to include:
    
      com.dolby.atlassian.jira.service.util.handler.CreateOrCommentHandler
      Extendable Mail Handler - CreateOrComment
    
  2. Ensure a valid POP Server is configured
    This is under Global Settings/Mail Servers. If the virtual mailbox approach is used, only one mailbox needs to be set up, in the example shared@jira.myco.net:
  3. Configure the handler
    Modify the two properties files. When done, they should be copied to WEB-INF/classes:
    If you use LDAP, the minimum config required is to identify a valid user which is used to validate LDAP connections. If not, you don't need this file.
    The EMH config file is just an extension of the properties that a mail handler is given; anything you could put in the Jira UI, you can equally add here. Look at the settings, understand what they do, and make changes as appropriate for your needs.
    The contents of the configuration file change as new features are added and refinements are made; check the 'example-emh.properties' available inside every JAR.
Some example properties are:
projectAutoAssign=true, issuetype=3, configFile=emh.properties

Emh.properties


extract example-emh.properties from the JAR:
unzip filename.jar example-emh.properties
mv example-emh.properties emh.properties
cp emh.properties WEB-INF/classes/

Service Properties:

Service listing after save:

  1. Restart Jira

Friday, March 11, 2011

Cron :: What is Cron ? Linux / Unix

Cron
This file is an introduction to cron, it covers the basics of what cron does,
and how to use it.

What is cron?
Cron is the name of a program that enables Unix users to execute commands or
scripts (groups of commands) automatically at a specified time/date. It is
normally used for sysadmin commands, like makewhatis, which builds a
search database for the man -k command, or for running a backup script,
but can be used for anything. A common use for it today is connecting to
the internet and downloading your email.

This file will look at Vixie Cron, a version of cron authored by Paul Vixie.

How to start Cron
Cron is a daemon, which means that it only needs to be started once, and will
lie dormant until it is required. A web server is a daemon: it stays dormant
until it gets asked for a web page. The cron daemon, or crond, stays dormant
until a time specified in one of the config files, or crontabs.

On most Linux distributions crond is automatically installed and entered into 
the start up scripts. To find out if it's running do the following:

cog@pingu $ ps aux | grep crond
root       311  0.0  0.7  1284  112 ?        S    Dec24   0:00 crond
cog       8606  4.0  2.6  1148  388 tty2     S    12:47   0:00 grep crond
The top line shows that crond is running; the bottom line is the search
we just ran.

If it's not running then either you killed it since the last time you rebooted,
or it wasn't started.

To start it, just add the line crond to one of your startup scripts. The
process automatically goes into the background, so you don't have to force
it with &. Cron will be started next time you reboot. To run it without
rebooting, just type crond as root:

root@pingu # crond
Many daemons (e.g. httpd and syslogd) need to be restarted
after their config files have been changed so that the program has a chance
to reload them. Vixie Cron will automatically reload the files after they
have been edited with the crontab command. Some cron versions reload the
files every minute, and some require restarting, but Vixie Cron just reloads
the files when they have changed.

Using cron
There are a few different ways to use cron (surprise, surprise). 

In the /etc directory you will probably find some sub directories called 
'cron.hourly', 'cron.daily', 'cron.weekly' and 'cron.monthly'. If you place 
a script into one of those directories it will be run either hourly, daily, 
weekly or monthly, depending on the name of the directory. 

If you want more flexibility than this, you can edit a crontab (the name 
for cron's config files). The main config file is normally /etc/crontab.
On a default RedHat install, the crontab will look something like this:

root@pingu # cat /etc/crontab
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/

# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly
The first part is almost self explanatory; it sets the variables for cron.

SHELL is the 'shell' cron runs under. If unspecified, it will default to 
the entry in the /etc/passwd file.

PATH contains the directories which will be in the search path for cron,
e.g. if you've got a program 'foo' in the directory /usr/cog/bin, it might
be worth adding /usr/cog/bin to the path, as it will stop you having to use
the full path to 'foo' every time you want to call it.

MAILTO is who gets mailed the output of each command. If a command cron is
running has output (e.g. status reports, or errors), cron will email the output
to whoever is specified in this variable. If no one is specified, then the
output will be mailed to the owner of the process that produced the output.

HOME is the home directory that is used for cron. If unspecified, it will 
default to the entry in the /etc/passwd file.

Now for the more complicated second part of a crontab file.
An entry in cron is made up of a series of fields, much like the /etc/passwd
file is, but in the crontab they are separated by a space. There are normally
seven fields in one entry. The fields are:

minute hour dom month dow user cmd
minute This controls what minute of the hour the command will run on,
  and is between '0' and '59'
hour This controls what hour the command will run on, and is specified in
         the 24 hour clock, values must be between 0 and 23 (0 is midnight)
dom This is the Day of Month, that you want the command run on, e.g. to
  run a command on the 19th of each month, the dom would be 19.
month This is the month a specified command will run on, it may be specified
  numerically (1-12), or as the name of the month (e.g. May)
dow This is the Day of Week that you want a command to be run on, it can
  also be numeric (0-7) or as the name of the day (e.g. sun).
user This is the user who runs the command.
cmd This is the command that you want run. This field may contain 
  multiple words or spaces.

If you don't wish to specify a value for a field, just place a * in the 
field.

e.g.
01 * * * * root echo "This command is run at one min past every hour"
17 8 * * * root echo "This command is run daily at 8:17 am"
17 20 * * * root echo "This command is run daily at 8:17 pm"
00 4 * * 0 root echo "This command is run at 4 am every Sunday"
0 4 * * sun root echo "So is this"
42 4 1 * * root echo "This command is run 4:42 am every 1st of the month"
01 * 19 07 * root echo "This command is run hourly on the 19th of July"
Notes:

Under dow 0 and 7 are both Sunday.

If both the dom and dow are specified, the command will be executed when
either of the events happens.
e.g.
* 12 16 * Mon root cmd
Will run cmd at midday every Monday and every 16th, and will produce the 
same result as both of these entries put together would:
* 12 16 * * root cmd
* 12 * * Mon root cmd
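The dom/dow rule above can be sketched in a few lines of Python. This is an illustration only, not cron's implementation; the function name is made up, and it uses Python's Monday-as-0 weekday convention rather than cron's Sunday-as-0.

```python
from datetime import date

def dom_dow_matches(d, dom, dow):
    """Does date d satisfy a cron entry restricted to day-of-month dom
    and/or weekday dow?  None stands for '*' (unrestricted).
    dow uses Python's convention: 0 = Monday."""
    dom_ok = dom is None or d.day == dom
    dow_ok = dow is None or d.weekday() == dow
    if dom is not None and dow is not None:
        # Both fields restricted: cron fires when EITHER matches, not both.
        return dom_ok or dow_ok
    return dom_ok and dow_ok

# "* 12 16 * Mon root cmd" fires on the 16th AND on every Monday:
print(dom_dow_matches(date(2011, 5, 16), 16, 0))  # the 16th -> True
print(dom_dow_matches(date(2011, 5, 23), 16, 0))  # a Monday -> True
print(dom_dow_matches(date(2011, 5, 24), 16, 0))  # neither -> False
```

The non-obvious part is the OR in the doubly-restricted case; a naive AND would only fire on Mondays that fall on the 16th.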

Vixie Cron also accepts lists in the fields. Lists can be in the form, 1,2,3 
(meaning 1 and 2 and 3) or 1-3 (also meaning 1 and 2 and 3).
e.g.
59 11 * * 1,2,3,4,5 root backup.sh
Will run backup.sh at 11:59 Monday, Tuesday, Wednesday, Thursday and Friday,
as will:
59 11 * * 1-5 root backup.sh 

Cron also supports 'step' values.
A value of */2 in the dom field would mean the command runs every two days
and likewise, */5 in the hours field would mean the command runs every 
5 hours.
e.g. 
* 12 10-16/2 * * root backup.sh
is the same as:
* 12 10,12,14,16 * * root backup.sh

*/15 9-17 * * * root connection.test
Will run connection.test every 15 mins between the hours of 9am and 5pm

Lists can also be combined with each other, or with steps:
* 12 1-15,17,20-25 * * root cmd
Will run cmd every midday between the 1st and the 15th as well as the 20th 
and 25th (inclusive) and also on the 17th of every month.
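The list, range and step syntax above can be illustrated with a short Python sketch. This is a simplification for demonstration, not Vixie Cron's actual parser, and the function name is made up.

```python
def expand_field(field, lo, hi):
    """Expand one crontab field into the sorted list of values it covers.
    lo/hi are the legal bounds for that field (e.g. 0-59 for minutes)."""
    values = set()
    for part in field.split(","):            # lists: "1,2,3" or "1-15,17,20-25"
        part, _, step = part.partition("/")  # steps: "*/15" or "10-16/2"
        step = int(step) if step else 1
        if part == "*":
            start, end = lo, hi
        elif "-" in part:                    # ranges: "1-5"
            start, end = map(int, part.split("-"))
        else:                                # a single value
            start = end = int(part)
        values.update(range(start, end + 1, step))
    return sorted(values)

# The examples from the text:
print(expand_field("1-5", 0, 7))              # [1, 2, 3, 4, 5] (Mon-Fri)
print(expand_field("*/15", 0, 59))            # [0, 15, 30, 45]
print(expand_field("10-16/2", 1, 31))         # [10, 12, 14, 16]
print(expand_field("1-15,17,20-25", 1, 31))   # 1..15, 17, 20..25
```

This makes it easy to see why "1-5" and "1,2,3,4,5" are interchangeable, and why "10-16/2" covers exactly 10, 12, 14 and 16.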

Names of weekdays and months are not case sensitive, but only
the first three letters should be used, e.g. Mon, sun or Mar, jul.

Comments are allowed in crontabs, but they must be preceded with a '#', and
must be on a line by themselves.


Multiuser cron
As Unix is a multiuser OS, some of the apps have to be able to support
multiple users; cron is one of these. Each user can have their own crontab
file, which can be created/edited/removed by the command crontab. This
command creates an individual crontab file and, although this is a text file
as /etc/crontab is, it shouldn't be edited directly. The crontab file is
often stored in /var/spool/cron/crontabs/ (Unix/Slackware/*BSD), 
/var/spool/cron/ (RedHat) or /var/cron/tabs/ (SuSE), 
but might be kept elsewhere depending on what Un*x flavor you're running.

To edit (or create) your crontab file, use the command crontab -e; this
will load the editor specified in the environment variables EDITOR or
VISUAL. To change the editor invoked, on Bourne-compatible shells try:
cog@pingu $ export EDITOR=vi
On C shells:
cog@pingu $ setenv EDITOR vi
You can of course substitute vi for the text editor of your choice.

Your own personal crontab follows exactly the same format as the main
/etc/crontab file does, except that you need not specify the MAILTO
variable, as this entry defaults to the process owner, so you would be mailed
the output anyway (but if you so wish, this variable can be specified).
You also need not have the user field in the crontab entries, e.g.

min hr dom month dow cmd
Once you have written your crontab file and exited the editor, crontab will
check the syntax of the file and give you a chance to fix any errors.

If you want to write your crontab without using the crontab command, you can
write it in a normal text file, using your editor of choice, and then use the
crontab command to replace your current crontab with the file you just wrote.
e.g. if you wrote a crontab called cogs.cron.file, you would use the cmd

cog@pingu $ crontab cogs.cron.file
to replace your existing crontab with the one in cogs.cron.file.

You can use 

cog@pingu $ crontab -l 
to list your current crontab, and

cog@pingu $ crontab -r
will remove (i.e. delete) your current crontab.

Privileged users can also change other users' crontabs with:

root@pingu # crontab -u <username>
and then following it with either the name of a file to replace the
existing user's crontab, or one of the -e, -l or -r options.

According to the documentation the crontab command can be confused by the
su command, so if you are running a su'ed shell, it is recommended you
use the -u option anyway.

Controlling Access to cron
Cron has a built-in feature allowing you to specify who may, and who
may not, use it. It does this by the use of the /etc/cron.allow and
/etc/cron.deny files. These files work the same way as the allow/deny files
for other daemons do. To stop a user from using cron, just put their name in
cron.deny; to allow a user, put their name in cron.allow. If you wanted to
prevent all users from using cron, you could add the line ALL to the cron.deny file:

root@pingu # echo ALL >>/etc/cron.deny
If you want user cog to be able to use cron, you would add the line cog 
to the cron.allow file:

root@pingu # echo cog >>/etc/cron.allow
If there is neither a cron.allow nor a cron.deny file, then the use of cron
is unrestricted (i.e. every user can use it).  If you were to put the name of
some users into the cron.allow file, without creating a cron.deny file, it
would have the same effect as creating a cron.deny file with ALL in it.
This means that any subsequent users that require cron access should be 
put in to the cron.allow file.  

Output from cron
As I've said before, the output from cron gets mailed to the owner of the
process, or the person specified in the MAILTO variable, but what if you
don't want that? If you want to mail the output to someone else, you can
just pipe the output to the command mail.
e.g.
 
cmd | mail -s "Subject of mail" user
If you wish to mail the output to someone not located on the machine, then in the
above example, substitute the email address of the person who wishes to
receive the output for user.

If you have a command that is run often, and you don't want to be emailed 
the output every time, you can redirect the output to a log file (or 
/dev/null, if you really don't want the output).
e.g.

cmd >> log.file
Notice we're using two > signs so that the output appends to the log file and
doesn't clobber previous output.
The above example only redirects the standard output, not the standard error.
If you want all output stored in the log file, this should do the trick:

cmd >> logfile 2>&1
You can then set up a cron job that mails you the contents of the file at
specified time intervals, using the cmd:

mail -s "logfile for cmd" user < log.file
 

Now you should be able to use cron to automate things a bit more.
A future file going into more detail, explaining the differences between 
the various different crons and with more worked examples, is planned.

Monday, January 17, 2011

Mod_Dosevasive in Apache

What Is Mod_Dosevasive?

Mod_Dosevasive is an evasive maneuvers module for Apache whose purpose is to react to HTTP DoS and/or brute force attacks.
An additional capability of the module is that it is also able to execute system commands when DoS attacks are identified. This provides an interface to send attacking IP addresses to other security applications, such as local host-based firewalls, to block the offending IP address. Mod_Dosevasive performs well against both single-server and distributed attacks; however, as with any DoS attack, the real concern is network bandwidth and processor/RAM usage.

How Does Mod_Dosevasive Work?

Mod_Dosevasive identifies attacks by creating and using an internal dynamic hash table of IP address/URI pairs based on the requests received. When a new request comes into Apache, Mod_Dosevasive will perform the following tasks:
  • The IP address of the client is checked in the temporary blacklist of the hash table. If the IP address is listed, then the client is denied access with a 403 Forbidden.
  • If the client is not currently on the blacklist, then the IP address of the client and the Universal Resource Identifier (URI) being requested are hashed into a key. Mod_Dosevasive will then check the listener's hash table to see whether the same hash already exists. If it does, it will then evaluate the total number of matched hashes and the timeframe in which they were requested against the thresholds specified in the httpd.conf file by the Mod_Dosevasive directives.
  • If the request does not get denied by the preceding check, then just the IP address of the client is hashed into a key. The module will then check the hash table in the same fashion as above. The only difference with this check is that it doesn't factor in what URI the client is requesting. It checks to see if the client's request count has gone above the threshold set for the entire site per the time interval specified.
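The hash-table-and-threshold idea in the steps above can be sketched in Python. This is a simplified illustration, not the module's actual C implementation; the function name and constants (mirroring DOSPageCount/DOSPageInterval) are made up, and only the per-URI page check is modelled.

```python
import time
from collections import defaultdict

PAGE_COUNT = 2        # analogous to DOSPageCount
PAGE_INTERVAL = 1.0   # analogous to DOSPageInterval, in seconds

hits = defaultdict(list)   # (ip, uri) key -> timestamps of recent requests
blacklist = set()          # temporarily blocked client IPs

def check_request(ip, uri, now=None):
    """Return an HTTP status for this request: 403 if blocked, else 200."""
    now = time.time() if now is None else now
    if ip in blacklist:                      # step 1: already blacklisted
        return 403
    key = (ip, uri)                          # step 2: hash of IP + URI
    # Keep only the hits that fall inside the current interval.
    hits[key] = [t for t in hits[key] if now - t < PAGE_INTERVAL]
    hits[key].append(now)
    if len(hits[key]) > PAGE_COUNT:          # threshold exceeded -> blacklist
        blacklist.add(ip)
        return 403
    return 200
```

Three rapid requests to the same URI trip the threshold; once an IP is blacklisted, every request from it is refused with a 403, regardless of URI.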
Configuration
You should add the following directives to your httpd.conf file:

LoadModule dosevasive20_module modules/mod_dosevasive20.so

<IfModule mod_dosevasive20.c>
    DOSHashTableSize    3097
    DOSPageCount        2
    DOSSiteCount        50
    DOSPageInterval     1
    DOSSiteInterval     1
    DOSBlockingPeriod   10
</IfModule>