Tuesday, 5 July 2011

Common Man Life in INDIA

India is a great country... a country with unity in diversity... a country with different religions and different cultures... and a democratic country too.
Even though India is still a developing country, it seems destined to remain a developing country and nothing more.
But we are improving our rank in corruption, population and indiscipline very fast. Do you
like this improvement? Think once about why it is happening. It is not happening because of the politicians or the government; it is because of us. Yes, because of us only.
A common man's life is not in his own hands in India. It is in the hands of politicians. Whatever they say, we follow blindly and accept blindly. If they ask us to do a rasta roko, we do it. If they ask us to do a bandh, we do it. Even the educated do the same; nobody thinks about the common man and his problems. This is what India is. We never think of others; we think only of ourselves and our own comfort, and we do not bother about anyone else.
India gives freedom to everybody because it is a democratic country....
We have freedoms like these:

We can break the traffic rules.....
There are no strict rules for the people who commit mistakes, and no strict punishment for crime.
If we have money we can do anything we want in INDIA..........this is the current situation in INDIA..............So what do we need to do?.........................(Cont.)

Sunday, 3 July 2011

Ab Initio Interview Questions

What is the relation between EME, GDE and the Co>Operating System?

Ans. EME stands for Enterprise Meta>Environment, GDE for Graphical Development Environment, and the Co>Operating System can be thought of as the Ab Initio server.
The relation between the Co>Operating System, EME and GDE is as follows:
The Co>Operating System is the Ab Initio server. It is installed on a particular OS platform, which is called the native OS. The EME is just like a repository in Informatica; it holds the metadata, transformations, DB config files, and source and target information. The GDE is the end-user environment where we develop the graphs (mappings, just like in Informatica).
The designer uses the GDE to design graphs and saves them to the EME or to a sandbox. The sandbox is at the user side, whereas the EME is at the server side.

What is the use of aggregation when we have rollup?

As we know, the Rollup component in Ab Initio is used to summarize groups of data records. So where would we use Aggregate?
Ans: Aggregate and Rollup can both summarize data, but Rollup is much more convenient to use, and a rollup is much more self-explanatory than an aggregate when you need to understand how a particular summarization is done. Rollup can also do other things, such as input and output filtering of records.
Aggregate and Rollup perform the same action, but Rollup can expose intermediate
results in main memory, while Aggregate does not support intermediate results.
What kinds of layouts does Ab Initio support?

Basically there are serial and parallel layouts supported by Ab Initio, and a graph can have both at the same time. The parallel layout depends on the degree of data parallelism: if the multifile system is 4-way parallel, then a component in the graph can run 4-way parallel, provided its layout is defined to match that degree of parallelism.

How can you run a graph infinitely?

To run a graph infinitely, the end script of the graph should call the .ksh file of the graph itself. So if the graph is named abc.mp, then the end script of the graph should contain a call to abc.ksh.
This way the graph will run indefinitely.
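A minimal sketch of the idea (all names are hypothetical): the deployed script re-launches itself from the graph's end script. The demo below simulates the pattern with a counter so it actually terminates:

```shell
# In a real deployment the end script of abc.mp would contain a line like
#   ksh /path/to/abc.ksh &
# so the deployed script starts itself again after each run.
# run_graph stands in for abc.ksh; the counter stops the recursion
# so this demo does not loop forever.
run_graph() {
  count=$1
  echo "graph run $count"
  if [ "$count" -lt 3 ]; then
    run_graph $((count + 1))    # the "call abc.ksh again" step
  fi
}

run_graph 1
```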

How do you add default rules in transformer?

Double-click the Transform parameter on the Parameters tab of the component properties; this opens the Transform Editor. In the Transform Editor, click the Edit menu and select Add Default Rules from the dropdown. It will show two options - 1) Match Names 2) Wildcard.

Do you know what a local lookup is?

If your lookup file is a multifile that is partitioned/sorted on a particular key, then the lookup_local function can be used instead of the lookup function. The lookup is then local to a particular partition, depending on the key.

A lookup file consists of data records that can be held in main memory. This lets the transform function retrieve records much faster than retrieving them from disk, and it allows the transform component to process records from multiple files quickly.

What is the difference between look-up file and look-up, with a relevant example?

Generally a lookup file represents one or more serial files (flat files). The amount of data is small enough to be held in memory, which allows transform functions to retrieve records much more quickly than they could from disk.
A lookup is a component of an Ab Initio graph where we can store data and retrieve it using a key parameter.
A lookup file is the physical file where the data for the lookup is stored.
How many components are in your most complicated graph?

It depends on the type of components you use; however, it is usually best to avoid using too many complicated transform functions in one graph.

Explain what is lookup?

Lookup is basically a keyed dataset. It can be used to map values based on the data present in a particular file (serial or multifile). The dataset can be static as well as dynamic (for example, when the lookup file is generated in a previous phase and used as a lookup in the current phase). Sometimes hash joins can be replaced by a Reformat plus a lookup, if one of the inputs to the join has a small number of records with a short record length.
Ab Initio has built-in functions to retrieve values from a lookup using its key.
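As an illustration (the lookup file label "Customers" and the fields cust_id and city are hypothetical), a transform rule can fetch a value through these built-ins; lookup() searches the whole lookup dataset, while lookup_local() searches only the current partition of a multifile lookup:

```
/* Illustrative DML only: lookup() scans the whole keyed dataset,
   lookup_local() scans just the records in the current partition. */
out.city       :: lookup("Customers", in.cust_id).city;
out.city_local :: lookup_local("Customers", in.cust_id).city;
```
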
What is a ramp limit?
The limit parameter contains an integer that represents the absolute number of reject events allowed.

The ramp parameter contains a real number (from 0 to 1) that represents a rate of reject events relative to the number of records processed.

Number of bad records allowed = limit + (number of records × ramp).

These two together provide the threshold value for bad records.
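For example (the values here are hypothetical), with limit = 10, ramp = 0.05 and 1000 records processed, the component tolerates 60 bad records before aborting:

```shell
# threshold = limit + ramp * number_of_records_processed
threshold=$(awk 'BEGIN { limit = 10; ramp = 0.05; n = 1000; print limit + ramp * n }')
echo "$threshold"    # 60
```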

Have you worked with packages?

Multistage transform components use packages by default. However, a user can create his own set of functions in a transform function and include it in other transform functions.

Have you used rollup component? Describe how.


If the user wants to group records on particular field values, then Rollup is the best way to do that. Rollup is a multistage transform function and it contains the following mandatory functions:
1. initialize
2. rollup
3. finalize
You also need to declare a temporary variable if you want to get counts for a particular group.

For each group, Rollup first calls the initialize function once, then calls the rollup function for each record in the group, and finally calls the finalize function once after the last rollup call.
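A sketch of this pattern (illustrative only; the fields key and count are hypothetical, and exact syntax can vary between Co>Operating System versions) for an expanded rollup that counts records per group:

```
/* Illustrative DML: count records per group with a temporary variable. */
type temporary_type =
  record
    decimal("") count;
  end;

temp :: initialize(in) =
begin
  temp.count :: 0;                 /* called once per group */
end;

temp :: rollup(temp, in) =
begin
  temp.count :: temp.count + 1;    /* called once per record in the group */
end;

out :: finalize(temp, in) =
begin
  out.key   :: in.key;             /* called once after the last rollup call */
  out.count :: temp.count;
end;
```
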

How do you add default rules in transformer?


Add Default Rules — opens the Add Default Rules dialog. Select one of the following:
Match Names — generates a set of rules that copies input fields to output fields with the same name.
Use Wildcard (.*) Rule — generates one rule that copies input fields to output fields with the same name.

1) If it is not already displayed, display the Transform Editor grid.
2) Click the Business Rules tab if it is not already displayed.
3) Select Edit > Add Default Rules.

In the case of a Reformat, if the destination field names are the same as, or a subset of, the source fields, then nothing needs to be written in the reformat xfr, unless you want to apply a real transform beyond reducing the set of fields or splitting the flow into a number of flows to achieve the functionality.

What is the difference between partitioning with key and round robin?


Partition by Key (hash partition) -> This partitioning technique is used to partition data when the keys are diverse. If a particular key value occurs in very large volumes there can be large data skew, but this method is used most often for parallel data processing.

Round-robin partitioning is another technique, used to distribute the data uniformly across the destination data partitions. The skew is zero when the number of records is divisible by the number of partitions. A real-life example is how a pack of 52 cards is distributed among 4 players in round-robin fashion.
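The card-dealing example can be sketched in shell: 52 records dealt round-robin across 4 partitions leave each partition with exactly 13 records (zero skew), whereas partition-by-key would send each record to partition hash(key) mod 4, so the skew would depend on the key distribution:

```shell
# Deal records 0..51 round-robin across 4 partitions and count each pile.
p0=0; p1=0; p2=0; p3=0
i=0
while [ $i -lt 52 ]; do
  case $((i % 4)) in      # round-robin: the destination cycles 0,1,2,3,0,...
    0) p0=$((p0 + 1)) ;;
    1) p1=$((p1 + 1)) ;;
    2) p2=$((p2 + 1)) ;;
    3) p3=$((p3 + 1)) ;;
  esac
  i=$((i + 1))
done
echo "$p0 $p1 $p2 $p3"    # 13 13 13 13
```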

How do you improve the performance of a graph?


There are many ways the performance of the graph can be improved.
1) Use a limited number of components in a particular phase
2) Use optimum value of max core values for sort and join components
3) Minimize the number of sort components
4) Minimize sorted join component and if possible replace them by in-memory join/hash join
5) Use only required fields in the sort, reformat, join components
6) Use phasing/flow buffers in case of merge, sorted joins
7) If the two inputs are huge then use sorted join, otherwise use hash join with proper driving port
8) For large dataset don't use broadcast as partitioner
9) Minimise the use of regular expression functions like re_index in the transform functions
10) Avoid repartitioning of data unnecessarily

Try to run the graph in an MFS (multifile) layout for as long as possible. For this, the input files should be partitioned, and if possible the output files should also be partitioned.
How do you truncate a table?

From Ab Initio, use the Run SQL component with the statement "truncate table <table name>".
Alternatively, use the Truncate Table component in Ab Initio.

Have you ever encountered an error called "depth not equal"?

When two components are linked together, if their layouts do not match, this error can occur during compilation of the graph. A solution is to place a partitioning component in between wherever the layout changes.

What is the function you would use to transfer a string into a decimal?


In this case no specific function is required if the sizes of the string and the decimal are the same; a decimal cast with the right size in the transform function will suffice. For example, say the source field field1 is defined as string(8) and the destination as decimal(8):

out.field1 :: (decimal(8)) in.field1;

If the destination field size is smaller than the input, the string_substring function can be used, like the following. Say the destination field is decimal(5):

out.field1 :: (decimal(5))string_lrtrim(string_substring(in.field1,1,5)); /* string_lrtrim is used to trim leading and trailing spaces */
What are primary keys and foreign keys?

Ab Initio

Ab Initio means "from the beginning" (it is Latin). The Ab Initio software works on the client-server model.

The client is called the Graphical Development Environment (you can call it GDE) and it
resides on the user's desktop. The server, or back end, is called the Co>Operating System, and it can reside on a mainframe or a remote UNIX machine.

The Ab Initio code is called a graph, and it has a .mp extension. The graph built in the GDE has to be deployed as a corresponding .ksh script; on the Co>Operating System, that
.ksh is run to do the required job.

How an Ab Initio job is run - what happens when you push the "Run" button?
•Your graph is translated into a script that can be executed in the Shell Development environment.
•This script and any metadata files stored on the GDE client machine are shipped (via
FTP) to the server.
•The script is invoked (via REXEC or TELNET) on the server.
•The script creates and runs a job that may run across many hosts.
•Monitoring information is sent back to the GDE client.
Ab Initio environment: The advantage of Ab Initio code is that it can run in both serial and multifile-system environments.
Serial environment: the normal UNIX file system.
Multifile system: a multifile system (MFS) is meant for parallelism. In an MFS, a particular file is physically stored across different partitions of the machine, or even different machines, but is pointed to by a logical file, which is stored in the Co>Operating System. The logical file is the control file, which holds the pointers to the physical locations.
About Ab Initio graphs: An Ab Initio graph comprises a number of components serving different purposes. Data is read or written by a component according to its DML (do not confuse this with the database "data manipulation language"). The most commonly used components are described in the following sections.

Co>Operating System

Co>Operating System is a program provided by Ab Initio which operates on top of the operating system and is the base for all Ab Initio processes. It provides additional features, known as air commands, and can be installed on a variety of system environments such as Unix, HP-UX, Linux, IBM AIX and Windows. The Ab Initio Co>Operating System provides the following features:
- Manages and runs Ab Initio graphs and controls the ETL processes
- Provides Ab Initio extensions to the operating system
- ETL process monitoring and debugging
- Metadata management and interaction with the EME


AbInitio GDE (Graphical Development Environment)

GDE is a graphical application for developers, used for designing and running Ab Initio graphs. The ETL process in Ab Initio is represented by graphs, which are formed from components (from the standard component library or custom ones), flows (data streams) and parameters. The GDE also provides:
- A user-friendly front end for designing Ab Initio ETL graphs
- The ability to run and debug Ab Initio jobs and trace execution logs
- A graph compilation process which results in a UNIX shell script that may be executed on a machine without the GDE installed


AbInitio EME

Enterprise Meta>Environment (EME) is an Ab Initio repository and environment for storing and managing metadata. It provides the capability to store both business and technical metadata. EME metadata can be accessed from the Ab Initio GDE, from a web browser, or from the Ab Initio Co>Operating System command line (air commands).


Conduct>It

Conduct>It is an environment for creating enterprise Ab Initio data integration systems. Its main role is to create Ab Initio plans, a special type of graph constructed from other graphs and scripts. Ab Initio provides both graphical and command-line interfaces to Conduct>It.


Data Profiler

The Data Profiler is an analytical application that can characterize data range, scope, distribution, variance, and quality. It runs in a graphical environment on top of the Co>Operating System.


Component Library

The Ab Initio component library is a set of reusable software modules for sorting, data transformation, and high-speed database loading and unloading. It is a flexible and extensible tool which adapts at runtime to the formats of the records it is given, and it allows the creation and incorporation of new components from any program, permitting integration and reuse of external legacy code and storage engines.

Sunday, 19 June 2011

JMS Transport

1) Downloaded ms0n.zip, ms0B.zip, me01.zip and ms0n.tar from the IBM support site.


2) Extracted the zip files to the location F:\JMS\GUITool

com.ibm.mq.pcf-6.1.jar
jmsadmingui.jar
mqcontext.jar
jmsadmingui.bat

Note: The above mentioned files were created in this folder.

3) Created jndi.properties.txt in F:\JMS\JNDI

java.naming.factory.initial=com.sun.jndi.fscontext.RefFSContextFactory
java.naming.provider.url=file:/F:\JMS\JNDI

4) Set the correct path settings in jmsadmingui.bat file

Go to F:\JMS\GUITool and edit the jmsadmingui.bat file and set the
• current directory (CUR)
• MQJ
• PATH
• CLASSPATH.

Note: In the CLASSPATH, specify all the jar files that were extracted from the zip files, plus the jar files located in C:\Program Files\IBM\WebSphere MQ\Java\lib.

5) Modified the JMSAdmin.config file

Navigate to C:\Program Files\IBM\WebSphere MQ\Java\bin under the MQSeries installation. Edit the JMSAdmin.config file and make the following modifications:



• INITIAL_CONTEXT_FACTORY=com.sun.jndi.fscontext.RefFSContextFactory
• PROVIDER_URL=file:/F:\JMS\GUITool


Details to be filled by the Middleware team:


Step 1: Run the batch file (jmsadmingui.bat)

Go to F:\JMS\GUITool and run jmsadmingui.bat

Step 2: Select the JMSAdmin.config file in the GUI Tool.

C:\Program Files\IBM\WebSphere MQ\Java\bin\JMSAdmin.config

Step 3: Specify the file path

file:/F:\JMS\GUITool


Example Queue Creation

Example ConnectionFactory Creation

Save it and make sure the .bindings file gets created under F:\JMS\GUITool.

JVM Subsystem Setup


The following steps can be used to create the JVM subsystem using the Siebel WebClient


1. Start any Siebel Business Application and navigate to Site Map → Administration → Server Configuration → Enterprises.

2. In the top list applet, select the Enterprise Server that you want to configure.

3. In the middle applet, click the Profile Configuration tab.

4. Click New to create a new component profile and set the following parameters:

 Profile = JAVA
 Alias = JAVA
 Subsystem Type = JVMSubsys

5. In the Profile Parameters list applet (the bottom applet), set the following values:

a) Set the Value of the JVM Classpath parameter to contain the following:

• The location of the JNDI.properties
• The JMS provider JAR files.
• The Siebel.jar and SiebelJI_lang.jar files.

E:\seaNAM811DEV\siebsrvr\CLASSES\SiebelJI_enu.jar
E:\seaNAM811DEV\siebsrvr\CLASSES\Siebel.jar

b) Set the Value of the JVM DLL Name parameter to the path where you have the jvm.dll file installed. For example,

C:\Program Files\Java\j2re1.4.2_16\bin\client\jvm.dll

c) Set the Value of the JVM Options record to any JVM-specific options that you would like to enable. For example,

-Xrs -Djava.compiler=NONE

JMS Sub System Setup:

The following procedure can be used for creating the JMS Transport subsystem using the Siebel Web Client.

1. Start any Siebel Business Application and navigate to Administration → Server Configuration → Enterprises.

2. In the top list applet, select the desired Enterprise Server that you want to configure.

3. In the middle applet, click the Profile Configuration tab.

4. Click New to create a new component profile and set the following parameters:

 Profile = M3-HISD-SOFTRAX
 Alias = M3-HISD-SOFTRAX
 Subsystem Type = JMSSubsys

5. In the Profile Parameters list applet (the bottom applet), specify the following parameters

• ConnectionFactory name =com.ibm.mq.jms.MQQueueConnectionFactory
• JVM Subsystem name = JAVA
• ReceiveQueue name =
• Receive Timeout = 20000



Data Handling Sub system Setup:

To create a JMS Receiver subsystem using the Siebel Web Client a user should follow the steps below:

1. Start any Siebel Business Application and navigate to Administration → Server Configuration → Enterprises.

2. In the top list applet, select the desired Enterprise Server.

3. In the middle applet, click the Profile Configuration tab.

4. Click New to create a new component profile and set the following parameters:

 Profile = M3-HISD-SOFTRAX-DHSS
 Alias = M3-HISD-SOFTRAX-DH
 Subsystem Type = EAITransportDataHandlingSubsys

5. In the Profile Parameters list applet (the bottom applet), specify the following parameters

 Workflow Process to Execute = M3 HISD-Softrax RECV WF


Listener Component Setup:


1. Start any Siebel Business Application and navigate to Administration → Server Configuration → Enterprises.

2. In the top list applet, select the desired Enterprise Server.

 Component = M3 HISD-Softrax Integration
 Component Type = Enterprise Application Integration Receiver


3. Component Parameters Applet

 Receiver Service Name = EAI JMS Transport
 EAI JMS Transport = ReceiveDispatch
 Receiver Connection Subsystem = M3-HISD-SOFTRAX
 Receiver Data Handling Subsyst = M3-HISD-SOFTRAX-DH

Siebel Application Object Manager

Siebel Application Object Manager (AOM)

Application Object Managers (AOMs) host the Business Objects layer and Data Objects layer of the Siebel architecture.

It is a server component that creates and processes data at multiple levels.

- UI layer (supported by the Siebel Web Engine)
- Business object layer (processes business logic)
- Data object layer (supported by the Data Manager)

The AOM is used primarily to support Siebel Web client connections.

AOMs are hosted as components in the Siebel Server and run on the application server (the machine that hosts the Siebel Server). The Siebel Server provides the infrastructure for an AOM to serve multiple Siebel Web client users. Multiple AOM components can run on a single Siebel Server installation.



AOMs communicate with clients using the TCP/IP protocol through a Web server that contains the Siebel Web Server Extension plug-in (SWSE). Communication between the Web server and the AOM can be compressed and encrypted. An independent session is established to serve incoming connect requests from each client. Subsequent requests from clients are directed to the same AOM tasks until the sessions are terminated.

After startup, an AOM does not reach its full run-time environment until after the first connect, which can lead to delays during the first connection.

Script for enrolling students for multiple courses


Account Activity List Applet:
Server Script

function WebApplet_InvokeMethod (MethodName)
{
    if (MethodName == "ShowPopup")
    {
        var sIds = "";
        var sTypeAct = "";
        var sDescAct = "";
        var bc = this.BusComp();
        var isRec = bc.FirstSelected();
        while (isRec)
        {
            var sId = bc.GetFieldValue("Id");
            var sType = bc.GetFieldValue("Type");
            var sDesc = bc.GetFieldValue("Description");
            //sIds = sIds + "," + sId;
            sTypeAct = sTypeAct + "," + sType;
            sDescAct = sDescAct + "," + sDesc;

            isRec = bc.NextSelected();
        }

        TheApplication().SetProfileAttr("SRes", sId); //sIds
        TheApplication().SetProfileAttr("Type", sTypeAct);
        TheApplication().SetProfileAttr("Desc", sDescAct);

        //TheApplication().RaiseErrorText("Hai :" + sIds);
        return (CancelOperation);
    }

    return (ContinueOperation);
}

Test Popup Applet:


Browser Script:

function Applet_PreInvokeMethod (name, inputPropSet)
{
    if (name == 'PickRecord')
    {
        if (confirm("Do you want to copy the student info?"))
            return ("ContinueOperation");
        else
            return ("CancelOperation");
    }

    return ("ContinueOperation");
}


Final:
Server Script

function WebApplet_PreInvokeMethod (MethodName)
{
    if (MethodName == "PickRecord")
    {
        var bc = this.BusComp();
        var isRec = bc.FirstSelected();
        if (isRec)
        {
            var srowId = bc.GetFieldValue("Id");
        } //IF

        var sTestRes = TheApplication().GetProfileAttr("SRes");
        var sType = TheApplication().GetProfileAttr("Type");
        var sDesc = TheApplication().GetProfileAttr("Desc");
        //TheApplication().RaiseErrorText("Hai:" + sType);

        var sMsg;
        var sMsg1;
        var bo = TheApplication().GetBusObject("Account");
        var bc = bo.GetBusComp("Account");
        var bc1 = bo.GetBusComp("Action");

        with (bc)
        {
            bc.ActivateField("Id");
            bc.ClearToQuery();
            bc.SetViewMode(3);
            bc.SetSearchSpec("Id", srowId);
            bc.ExecuteQuery(ForwardOnly);
            var isRec1 = bc.FirstRecord();
            var sAccountId = bc.GetFieldValue("Id");
            if (isRec1)   // was "isRec"; the result of the account query is what matters here
            {
                with (bc1)
                {
                    var sTypeRecArray = sType.split(",");
                    var sDescRecArray = sDesc.split(",");

                    // index starts at 1: the profile strings begin with a comma,
                    // so element 0 of each array is empty
                    for (var i = 1; i < sTypeRecArray.length; i++)
                    {
                        //checking
                        bc1.ActivateField("Type");
                        bc1.ClearToQuery();
                        bc1.SetViewMode(3);
                        bc1.SetSearchSpec("Type", sTypeRecArray[i]);
                        bc1.ExecuteQuery(ForwardOnly);
                        var isRec2 = bc1.FirstRecord();
                        if (isRec2)
                        {
                            TheApplication().RaiseErrorText("Record exists of type: " + sTypeRecArray[i]);
                        }
                        else
                        {
                            sMsg = sTypeRecArray[i];
                            bc1.NewRecord(NewAfter);
                            //bc1.SetFieldValue("Account Id", sTestRes);
                            bc1.SetFieldValue("Type", sTypeRecArray[i]);
                            bc1.SetFieldValue("Description", sDescRecArray[i]);
                            bc1.WriteRecord();
                        }
                    }
                } //WITH bc1
            } //IF
        } //WITH bc

        return (CancelOperation);
    } //METHOD
    //TheApplication().SetProfileAttr("SRes", sIds);

    return (ContinueOperation);
}

Creating an inbound WS


Objective

This document helps us create a sample inbound web service which uses a business service. An outbound web service is then created, and a workflow is used to call the inbound service we created, thereby helping us learn how to create both an inbound and an outbound web service in Siebel.

Creating an inbound web service

1. Create a Business Service with the following details. You can instead use the attached XML and import the Business Service into your Siebel Tools.

Add the following code in the PreInvokeMethod of the Business Service.

function Service_PreInvokeMethod (MethodName, Inputs, Outputs)
{
    var z;

    switch (MethodName)
    {
        case 'Add':
            z = ToNumber(Inputs.GetProperty("a")) + ToNumber(Inputs.GetProperty("b"));
            Outputs.SetProperty("sum", z);
            return (CancelOperation);
    }

    return (CancelOperation);
}


2. Compile the business service to your local SRF and move it to the server.

3. Create an inbound web service and select the Business Service you've created.

4. Click on Generate WSDL and save the WSDL file. Here is the WSDL generated by the above example.

Outbound Web Service

5. Open Siebel Tools (connected to the sample database) and import the WSDL.

File -> New Object -> EAI -> Web service


Select the WSDL file.

This will create a proxy business service and a runtime configuration data file. Compile the business service into the sample SRF.



6. Import the runtime configuration data file (in this case WSDLexp.xml) in the Outbound Web Services screen of the sample application.

7. Create a workflow in the sample to call the inbound web service through the proxy business service that we imported in step 5. You can also import the following workflow instead of creating it manually.



Here are the steps to create the workflow.

8. Right-click and select Edit Workflow Process, create a workflow as shown below, and create a new process property called Sum Value.

9. Select the proxy business service name and the method.

10. Right-click the business service and click Show Input Arguments. Add the following details.



11. Right-click the business service and click Show Output Arguments. Add the following details.





12. Set the debug settings in View -> Options -> Debug. Right-click and simulate the workflow. Look at the Watch window -> Sum Value process property for the output.