Checkout process on Windows:
Before checking out on a Windows system, the directory must be configured in CVS by the CVS admin.
Binary files must also be marked as binary in CVS.
To mark a file as binary:
cvs admin -kb name_of_binary_file
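Once the binary files are marked, a typical checkout from a Windows client (for example with a command-line CVS client such as CVSNT's cvs.exe) looks roughly like this; the server name, repository path, and module name below are placeholders, and a pserver setup is assumed:
cvs -d :pserver:username@cvsserver:/cvsroot login
cvs -d :pserver:username@cvsserver:/cvsroot checkout module_name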
Tuesday, December 30, 2008
Tuesday, December 16, 2008
Some changes in Tomcat
[cit146@cit146 bin]$ export JAVA_HOME=/usr/java/jdk1.6.0_03/
[cit146@cit146 bin]$ export CATALINA_HOME=/home/cit146/software/apache-tomcat-5.5.20/
export CLASSPATH=/home/cit146/software/apache-tomcat-5.5.20/common/lib/servlet-api.jar
[cit146@cit146 bin]$ ./startup.sh
Using CATALINA_BASE: /home/cit146/software/apache-tomcat-5.5.20/
Using CATALINA_HOME: /home/cit146/software/apache-tomcat-5.5.20/
Using CATALINA_TMPDIR: /home/cit146/software/apache-tomcat-5.5.20//temp
Using JRE_HOME: /usr/java/jdk1.6.0_03/
[cit146@cit146 bin]$
Copy yumemetl.war from /home/cit146/dist to /home/cit146/software/apache-tomcat-5.5.20/webapps
Copy yumeetl.xml to /home/cit146/software/apache-tomcat-5.5.20/conf/Catalina/localhost
Edit server.xml in /home/cit146/software/apache-tomcat-5.5.20/conf [URL values]
Copy mysql.jar to /home/cit146/software/apache-tomcat-5.5.20/common/lib
Edit log4jconfiguration.xml and quartz.properties in /home/cit146/software/apache-tomcat-5.5.20/webapps/yumeetl/WEB-INF/classes
[ * change param value="[%d] [%t] %-5p %c %X{processid} - %m%n" (if you want this shown in the output) ]
Insert export CATALINA_OPTS="-Dlog4j.configuration=log4jconfiguration.xml" into catalina.sh (before the line # Get standard environment variables, PRGDIR=`dirname "$PRG"`)
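Written out as shell commands, the copy steps above look roughly like this (assuming yumeetl.xml and mysql.jar sit in the current directory; adjust the source paths to wherever they actually live):
cp /home/cit146/dist/yumemetl.war /home/cit146/software/apache-tomcat-5.5.20/webapps/
cp yumeetl.xml /home/cit146/software/apache-tomcat-5.5.20/conf/Catalina/localhost/
cp mysql.jar /home/cit146/software/apache-tomcat-5.5.20/common/lib/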
Firing process
----------------------
First remove the existing entries for the job from the Quartz tables:
delete from QRTZ_CRON_TRIGGERS where TRIGGER_NAME='com.yumecorp.etl.martcleanup_process';
delete from QRTZ_TRIGGERS where TRIGGER_NAME ='com.yumecorp.etl.martcleanup_process';
delete from QRTZ_JOB_DETAILS where JOB_NAME='com.yumecorp.etl.martcleanup_process';
Then insert the new rows into QRTZ_JOB_DETAILS, QRTZ_TRIGGERS, and QRTZ_CRON_TRIGGERS.
For example:
INSERT INTO QRTZ_JOB_DETAILS (JOB_NAME, JOB_GROUP, DESCRIPTION, JOB_CLASS_NAME, IS_DURABLE, IS_VOLATILE, IS_STATEFUL, REQUESTS_RECOVERY, JOB_DATA) VALUES
('com.yumecorp.etl.martcleanup_process', 'com.yumecorp.etl', NULL, 'com.yumecorp.etl.ETLProcessJob', 0, 0, 0, 0, 'jndiName=java:comp/env/jdbc/yume\nmartjndi=yumemart\nportaldb=qadb\nfilePath=martcleanup_etlprocess.xml');
INSERT INTO QRTZ_TRIGGERS (TRIGGER_NAME, TRIGGER_GROUP, JOB_NAME, JOB_GROUP, IS_VOLATILE, DESCRIPTION, NEXT_FIRE_TIME, PREV_FIRE_TIME, TRIGGER_STATE, TRIGGER_TYPE, START_TIME, END_TIME, CALENDAR_NAME, MISFIRE_INSTR, JOB_DATA) VALUES
('com.yumecorp.etl.martcleanup_process', 'com.yumecorp.etl', 'com.yumecorp.etl.martcleanup_process', 'com.yumecorp.etl', 0, NULL, -1, -1, "WAITING", "CRON", -1, 0, NULL, 0, NULL);
INSERT INTO QRTZ_CRON_TRIGGERS(TRIGGER_NAME, TRIGGER_GROUP, CRON_EXPRESSION,TIME_ZONE_ID) VALUES
('com.yumecorp.etl.martcleanup_process', 'com.yumecorp.etl', '0 15,45 * * * ?', NULL);
Misfire
-------------
update QRTZ_TRIGGERS set NEXT_FIRE_TIME=-1 where TRIGGER_NAME='com.yumecorp.etl.martcleanup_process';
The output is shown in
/home/cit146/software/apache-tomcat-5.5.20/bin/yumeappserver.log
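To confirm the trigger state after these changes and to watch the job run, something like the following can be used. This is a sketch: it assumes the Quartz tables live in the qadb database on the 192.168.1.88:4001 instance used elsewhere in these notes, and relies on Quartz storing fire times as milliseconds since the epoch:
mysql -u root -p -h 192.168.1.88 -P 4001 qadb -e "SELECT TRIGGER_NAME, TRIGGER_STATE, FROM_UNIXTIME(FLOOR(NEXT_FIRE_TIME/1000)) AS next_fire FROM QRTZ_TRIGGERS WHERE TRIGGER_NAME='com.yumecorp.etl.martcleanup_process';"
tail -100f /home/cit146/software/apache-tomcat-5.5.20/bin/yumeappserver.log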
Thursday, December 4, 2008
124 error
solution
1) Don't perform the operation on an empty table.
2) Once you get this error, move to another instance.
Ant
Apache Ant is a software tool for automating software build processes. It is similar to make but is implemented using the Java language, requires the Java platform, and is best suited to building Java projects.
The most immediately noticeable difference between Ant and make is that Ant uses XML to describe the build process and its dependencies, whereas make has its Makefile format. By default the XML file is named build.xml.
Why another build tool when there is already make, gnumake, nmake, jam, and others? Because all those tools have limitations that Ant's original author couldn't live with when developing software across multiple platforms. Make-like tools are inherently shell-based: they evaluate a set of dependencies, then execute commands not unlike what you would issue on a shell. This means that you can easily extend these tools by using or writing any program for the OS that you are working on; however, this also means that you limit yourself to the OS, or at least the OS type, such as Unix, that you are working on.
Linux commands for mysql
tail -100f catalina.out
less catalina.out ----> view a file (e.g. catalina.out)
cat > filename (press Enter, then Ctrl+C) ---> creates an empty file / truncates an existing one (cleanup)
../bin/startup.sh ---> run the script directly instead of cd .. followed by bin/startup.sh
cp /home/cit146/Ayyachamy/pro/yumeetl/tmp/yumemetl_27_11_2008_04_13_binary.jar --> copy from the current location to /home/cit146.......
rm -rf catalina.out ---> remove a file (the -r flag is only needed for directories)
cat server.xml | grep "..." ---> grep is used for searching
cat server.xml | grep "..." | wc -l ---> count the matching lines
ps -ef | grep tomcat ---> show how many Tomcat processes are running
kill -9 8738 ---> force-kill the process with that PID
88]# scp server.xml root@192.168.1.146:/usr/local ---> copy server.xml from 192.168.1.88 to 192.168.1.146
mysql> show processlist; ---> show how many processes are running on the current database server
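For example, to count how many lines of catalina.out mention an exception (the search string here is just an illustration):
cat catalina.out | grep "Exception" | wc -l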
Wednesday, November 26, 2008
Installation of Tomcat in Linux
First download the package,
then extract the package:
[bbusschots@honeysuckle ~]$ tar -xzf apache-tomcat-5.5.17.tar.gz
[bbusschots@honeysuckle ~]$ sudo mv apache-tomcat-5.5.17 /usr/local/
[bbusschots@honeysuckle ~]$ cd /usr/local/
[bbusschots@honeysuckle local]$ sudo ln -s apache-tomcat-5.5.17/ tomcat
Perform the above action
setting environment variables
-----------------------------------------
1. JAVA_HOME - needs to point to your Java install. (If you used the latest Sun RPM
that will be /usr/java/jdk1.5.0_6)
2. CATALINA_HOME - should be set to /usr/local/tomcat
by running:
[root@cit146 bin]# export JAVA_HOME=/usr/java/jdk1.6.0_03/
[root@cit146 bin]# export CATALINA_HOME=/usr/local/tomcat
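These exports only last for the current shell session; one optional extra (not part of the original steps) is to append them to ~/.bashrc so they survive a logout:
echo 'export JAVA_HOME=/usr/java/jdk1.6.0_03/' >> ~/.bashrc
echo 'export CATALINA_HOME=/usr/local/tomcat' >> ~/.bashrc
source ~/.bashrc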
You are now ready to start Tomcat with the command /usr/local/tomcat/bin/startup.sh and stop Tomcat with the command /usr/local/tomcat/bin/shutdown.sh.
Tomcat will not start automatically at boot though.
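If you do want it started at boot, one crude approach on a system like this is to append the startup call to /etc/rc.local; this is only a sketch, and JAVA_HOME has to be exported there too because rc.local does not read your shell profile:
cat >> /etc/rc.local <<'EOF'
export JAVA_HOME=/usr/java/jdk1.6.0_03/
/usr/local/tomcat/bin/startup.sh
EOF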
Note : [root@cit146 bin]# ./startup.sh
Using CATALINA_BASE: /usr/local/tomcat
Using CATALINA_HOME: /usr/local/tomcat
Using CATALINA_TMPDIR: /usr/local/tomcat/temp
Using JRE_HOME: /usr/java/jdk1.6.0_03/
touch: cannot touch `/usr/local/tomcat/logs/catalina.out': No such file or directory
Solution: create a logs directory in /usr/local/tomcat/
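That is, create the missing directory before starting Tomcat again:
mkdir -p /usr/local/tomcat/logs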
[root@cit146 bin]# ./startup.sh
Using CATALINA_BASE: /usr/local/tomcat
Using CATALINA_HOME: /usr/local/tomcat
Using CATALINA_TMPDIR: /usr/local/tomcat/temp
Using JRE_HOME: /usr/java/jdk1.6.0_03/
show the status
---------------------
[root@cit146 tomcat]# cd logs
[root@cit146 logs]# tail -100f catalina.out
Nov 26, 2008 4:04:50 PM org.apache.catalina.core.AprLifecycleListener lifecycleEvent
INFO: The Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/java/jdk1.6.0_03/jre/lib/i386/server:/usr/java/jdk1.6.0_03/jre/lib/i386:/usr/java/jdk1.6.0_03/jre/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib
Nov 26, 2008 4:04:50 PM org.apache.coyote.http11.Http11BaseProtocol init
INFO: Initializing Coyote HTTP/1.1 on http-8080
Nov 26, 2008 4:04:50 PM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 1129 ms
Nov 26, 2008 4:04:50 PM org.apache.catalina.core.StandardService start
INFO: Starting service Catalina
Nov 26, 2008 4:04:50 PM org.apache.catalina.core.StandardEngine start
INFO: Starting Servlet Engine: Apache Tomcat/5.5.20
Nov 26, 2008 4:04:50 PM org.apache.catalina.core.StandardHost start
INFO: XML validation disabled
Nov 26, 2008 4:04:51 PM org.apache.coyote.http11.Http11BaseProtocol start
INFO: Starting Coyote HTTP/1.1 on http-8080
Nov 26, 2008 4:04:52 PM org.apache.jk.common.ChannelSocket init
INFO: JK: ajp13 listening on /0.0.0.0:8009
Nov 26, 2008 4:04:52 PM org.apache.jk.server.JkMain start
INFO: Jk running ID=0 time=0/23 config=null
Nov 26, 2008 4:04:52 PM org.apache.catalina.storeconfig.StoreLoader load
INFO: Find registry server-registry.xml at classpath resource
Nov 26, 2008 4:04:52 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 1483 ms
Details available at http://www.bartbusschots.ie/blog/?p=240
Sunday, November 16, 2008
mysql dump
How to import or export a database from localhost or another host.
Note:
Export
[root@cit146 Desktop]# mysqldump -u root -ppassword -h 192.168.1.88 -P4001 qadb_1 >88_qadb_1.sql
Then create the target database on the destination instance, for example:
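A one-liner for that, using the same destination host, port, and credentials as the import command below:
mysql -u root -ppassword -h 192.168.1.146 -P4001 -e "CREATE DATABASE qadb_2;"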
import
[root@cit146 Desktop]#mysql -u root -ppassword -h 192.168.1.146 -P4001 qadb_2 <88_qadb_1.sql
ERROR 1031 (HY000) at line 4560: Table storage engine for 'event_summary_watchedchapter1' doesn't have this option
debugging
---------------
1) show create table event_summary_watchedchapter1; // to show internal structure
In this structure the Engine is FEDERATED,
so open the dump with vim *.sql
and delete that table's dump section in the editor (press dd to delete a line).
2) A second thought is that the federated table links to an unreachable host or an unknown table;
this can always be checked with the table status or SHOW CREATE TABLE command.
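An alternative to deleting the section by hand in vim is to skip the problem table when taking the dump in the first place, using mysqldump's --ignore-table option; a sketch based on the export command above and the table from the error:
mysqldump -u root -ppassword -h 192.168.1.88 -P4001 --ignore-table=qadb_1.event_summary_watchedchapter1 qadb_1 > 88_qadb_1.sql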
Monday, November 3, 2008
install
To show all Java versions on your system:
/usr/sbin/alternatives --config java
java -version
-------------------------------------------------
The simplest way to install or uninstall packages in Linux is
by using the yum tool...
for example: yum install eclipse
the installed files will be stored in /usr/share/eclipse
------------------------------------------------------------------------------------
yum
yum is a software package manager. It is a tool for installing, updating, and removing packages and their dependencies on RPM-based systems.
It automatically computes dependencies and figures out what things should occur to install packages. It makes it easier to maintain groups of machines without having to manually update each one using rpm.
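The day-to-day operations mentioned above, with eclipse as an example package:
yum install eclipse     # install a package plus its dependencies
yum update eclipse      # update it (plain 'yum update' updates everything)
yum remove eclipse      # uninstall it
yum search eclipse      # look a package up in the configured repositories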
Advantages
----------------
** It can change to new mirrors on the fly from an internet-based mirror list.
When your usual mirror breaks down, yum jumps to another one (chosen by chance), which keeps the tool very smooth to use even if there are heavy problems with some of the main mirrors. It also balances the load on the servers.
** Package signature tests; the keys can be downloaded from a given internet address.
This adds a tiny bit more security.
---------------------------------------------------------------------------
to access the shared documents
smb://192.168.1.200/g/shared
------------------------------------------------------------------
mysql version
mysql Ver 14.12 Distrib 5.0.50sp1a, for redhat-linux-gnu (i686) using readline 5.0
rem pts:
Copy a mysql data folder to /var/lib/mysql3310/..... (it contains the .MYD and .frm files);
this directory is also where the .sock and .pid files are created.
mysql -u demouser -p -h 192.168.1.88 -P 3310
3310 denotes the port number.
This instance is started with mysqld_multi start 3310;
the running instances are shown with mysqld_multi report;
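mysqld_multi finds each instance through a [mysqldN] group in my.cnf; a minimal sketch of what the 3310 group might look like (the datadir/socket/pid paths follow the /var/lib/mysql3310 layout mentioned above and are assumptions):
cat >> /etc/my.cnf <<'EOF'
[mysqld3310]
port     = 3310
datadir  = /var/lib/mysql3310
socket   = /var/lib/mysql3310/mysql.sock
pid-file = /var/lib/mysql3310/mysql.pid
EOF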
SQuirreL installation:
Download the squirrel-sql-2.6.8-install.jar file;
then run it as a Java application (it is an installer jar);
installation is completed.
Make the MySQL server connection:
Edit the extra class path to /home/cit146/workspace/yumeetl/lib/mysql.jar
Class name: com.mysql.jdbc.Driver
Add an alias:
select the driver as MySQL Server
example URL: jdbc:mysql://192.168.1.88:3309/qa_martdb_2_0
user name: root
click ---- Test
Thursday, October 23, 2008
Connection settings
mysql -u demouser -p -h 192.168.1.88 -P 4003
Take the latest Java version from Sun and install it,
use /usr/sbin/alternatives --install /usr/bin/java java /usr/java/jdk1.6.0_03/bin/java 2
then
/usr/sbin/alternatives --config java
Wednesday, October 22, 2008
cvs
CVS - Concurrent Versions System
CVS is a version control system, an important component of Source Configuration Management (SCM). Using it, you can record the history of source files and documents. It fills a similar role to the free software RCS, PRCS, and Aegis packages. CVS is a production-quality system in wide use around the world.
Features over RCS:
-------------------------
Client-server CVS:
------------------------
- The version history is stored on a single central server and the client machines have a copy of all the files that the developers are working on. Therefore, the network between the client and the server must be up to perform CVS operations (such as checkins or updates) but need not be up to edit or manipulate the current versions of the files. Clients can perform all the same operations which are available locally.
- In cases where several developers or teams want to each maintain their own version of the files, because of geography and/or policy, CVS's vendor branches can import a version from another team (even if they don't use CVS), and then CVS can merge the changes from the vendor branch with the latest files if that is what is desired.
RCS
-------
The Revision Control System (RCS) manages multiple revisions of files. RCS automates the storing, retrieval, logging, identification, and merging of revisions.
RCS is useful for text that is revised frequently, including source code, programs, documentation, graphics, papers, and form letters.
Aegis
---------
Aegis is a transaction-based software configuration management system. It provides a framework within which a team of developers may work on many changes to a program independently, and Aegis coordinates integrating these changes back into the master source of the program, with as little disruption as possible.
Tuesday, October 21, 2008
DBMS

OLTP Benefits
------------------
Online Transaction Processing has two key benefits: simplicity and efficiency.
Reduced paper trails and the faster, more accurate forecasts for revenues and expenses are both examples of how OLTP makes things simpler for businesses. It also provides a concrete foundation for a stable organization because of the timely updating. Another simplicity factor is that of allowing consumers the choice of how they want to pay, making it that much more enticing to make transactions.
OLTP is proven efficient because it vastly broadens the consumer base for an organization, the individual processes are faster, and it’s available 24/7.
Disadvantages
----------
In B2B transactions, businesses must go offline to complete certain steps of an individual process, causing buyers and suppliers to miss out on some of the efficiency benefits that the system provides. As simple as OLTP is, the simplest disruption in the system has the potential to cause a great deal of problems, wasting both time and money. Another economic cost is the potential for server failures, which can cause delays or even wipe out an immeasurable amount of data.
OLAP
--------
Online Analytical Processing, or OLAP (IPA: /ˈoʊlæp/), is an approach to quickly provide answers to analytical queries that are multi-dimensional in nature.[1] OLAP is part of the broader category business intelligence, which also encompasses relational reporting and data mining.[2] The typical applications of OLAP are in business reporting for sales, marketing, management reporting, business process management (BPM), budgeting and forecasting, financial reporting and similar areas. The term OLAP was created as a slight modification of the traditional database term OLTP (Online Transaction Processing).[3]
Data warehouse
-----------------------
Data warehouses are designed to facilitate reporting and analysis[1].
This classic definition of the data warehouse focuses on data storage. However, the means to retrieve and analyze data, to extract, transform and load data, and to manage the data dictionary are also considered essential components of a data warehousing system. Many references to data warehousing use this broader context. Thus, an expanded definition for data warehousing includes business intelligence tools, tools to extract, transform, and load data into the repository, and tools to manage and retrieve metadata.
benefit:
-----------
A data warehouse provides a common data model for all data of interest regardless of the data's source. This makes it easier to report and analyze information than it would be if multiple data models were used to retrieve information such as sales invoices, order receipts, general ledger charges, etc.
Architecture
---------------------
Operational database layer
---------------------------------------
The source data for the data warehouse - an organization's ERP systems fall into this layer.
Informational access layer
--------------------------------------
The data accessed for reporting and analyzing, and the tools for reporting and analyzing data - business intelligence tools fall into this layer. The Inmon-Kimball differences about design methodology, discussed later in this article, have to do with this layer.
Data access layer
--------------------------
The interface between the operational and informational access layers - tools to extract, transform, and load data into the warehouse fall into this layer.
Metadata layer
---------------------
The data directory - this is often more detailed than an operational system data directory. There are dictionaries for the entire warehouse and sometimes dictionaries for the data that can be accessed by a particular reporting and analysis tool.
Data mining
------------------
Data mining is the process of sorting out and analyzing data in a data warehouse or data mart.
Difference between a data warehouse and a data mart
-----------------------------
A data mart tends to start from an analysis of user needs, whereas a data warehouse tends to start from an analysis of what data already exists and how it can be collected in such a way that the data can later be used.
data mart
---------------
Data marts are often derived from subsets of data in a data warehouse, though in the bottom-up data warehouse design methodology the data warehouse is created from the union of organizational data marts.
Operational data store
----------------------------------
An operational data store (or "ODS") is a database designed to integrate data from multiple sources to make analysis and reporting easier.



Normalization
--------------------
Normalization splits up data to avoid redundancy (duplication) by moving commonly repeating groups of data into a new table. Normalization therefore tends to increase the number of tables that need to be joined in order to perform a given query, but reduces the space required to hold the data and the number of places where it needs to be updated if the data changes.
star and snowflake schema
----------------------------------------
The star and snowflake schema are most commonly found in dimensional data warehouses and data marts where speed of data retrieval is more important than the efficiency of data manipulations. As such, the tables in these schema are not normalized much, and are frequently designed at a level of normalization short of third normal form.
star schema
------------------
A fact table consists of the measurements, metrics or facts of a business process. It is often located at the centre of a star schema, surrounded by dimension tables.
Entity model
------------------
An entity-relationship model (ERM) is an abstract conceptual representation of structured data. Entity-relationship modeling is a relational schema database modeling method, used in software engineering to produce a type of conceptual data model (or semantic data model) of a system, often a relational database, and its requirements in a top-down fashion. Diagrams created using this process are called entity-relationship diagrams.
benefits:
-------------
Entity modelling can aid the understanding of an organisation's data, both computerised and non-computerised, for the strategic benefit of the organisation and as an aid to communications within and across its boundaries.
An entity model also serves as an aid to information management within an organisation.
The benefits of entity model clustering to the organisation, for end-user computing, to the information systems department, and to the entity modelling process are discussed.
sql
----
SQL (Structured Query Language) is a database computer language designed for the retrieval and management of data in relational database management systems (RDBMS), database schema creation and modification, and database object access control management.[2][3]
Dql
------
DQL (Documentum Query Language) is a query language which allows you to do very complex queries involving:
1. Property searches
2. Searches for words and phrases within documents
3. Other specialized searching capabilities added for document and content management
GROUP BY
---------------
The GROUP BY statement is used in conjunction with the aggregate functions to group the result-set by one or more columns.
ORDER BY
---------------
The ORDER BY keyword is used to sort the result-set by a specified column.
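A small example combining the two, run from the mysql client; the shopdb database and the orders table are made up for illustration, and the query returns the total order amount per customer, largest first:
mysql -u root -p shopdb -e "SELECT customer_id, SUM(amount) AS total FROM orders GROUP BY customer_id ORDER BY total DESC;"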
First Normal Form (1NF)
---------------------------------
First normal form (1NF) sets the very basic rules for an organized database:
* Eliminate duplicative columns from the same table.
* Create separate tables for each group of related data and identify each row with a unique column or set of columns (the primary key).
Second Normal Form (2NF)
--------------------------------------
Second normal form (2NF) further addresses the concept of removing duplicative data:
* Meet all the requirements of the first normal form.
* Remove subsets of data that apply to multiple rows of a table and place them in separate tables.
* Create relationships between these new tables and their predecessors through the use of foreign keys.
Third Normal Form (3NF)
-------------------------------------
Third normal form (3NF) goes one large step further:
* Meet all the requirements of the second normal form.
* Remove columns that are not dependent upon the primary key.
Fourth Normal Form (4NF)
--------------------------------------
Finally, fourth normal form (4NF) has one additional requirement:
* Meet all the requirements of the third normal form.
* A relation is in 4NF if it has no multi-valued dependencies.
Remember, these normalization guidelines are cumulative. For a database to be in 2NF, it must first fulfill all the criteria of a 1NF database.
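As a tiny illustration of the 1NF rule about duplicative columns (all names are made up): instead of phone1/phone2 columns on a customer table, the phone numbers get their own table keyed back to the customer.
mysql -u root -p shopdb -e "
CREATE TABLE customer (
  customer_id INT PRIMARY KEY,
  name        VARCHAR(100)
);
CREATE TABLE customer_phone (
  customer_id INT,           -- refers back to customer.customer_id
  phone       VARCHAR(20),
  PRIMARY KEY (customer_id, phone)
);"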
Index
--------
A database index is a data structure that improves the speed of operations on a database table. Indices can be created using one or more columns of a database table, providing the basis for both rapid random lookups and efficient access of ordered records.
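For example, adding an index on a column that is filtered on frequently (the table and column are the same illustrative ones as above):
mysql -u root -p shopdb -e "CREATE INDEX idx_orders_customer ON orders (customer_id);"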
Random access file:
----------------------------
Random access files are useful for many different applications. One specific example of a good use of random access files are zip files. Zip files contain many other files and are compressed to conserve space. Zip files contain a directory at the end that indicates where the various contained files begin:
An efficient way to extract a particular file from within a zip file is with a random access file:
- open the file
- find and read the directory locating the entry for the desired file
- seek to the position of the desired file
- read it
Random access is sometimes called direct access.