
Thursday, 31 January 2019

What are the ODI Security Profiles?


Answer:- 



Profile 

A profile represents a generic rights model for working with Oracle Data Integrator. One or more profiles can be assigned to a user.



Predefined profile types



1) CONNECT :- Minimum generic profile for connecting to Oracle Data Integrator. All users must have at least this profile.



2) DESIGNER :- Generic profile for Mapping developers, or users working on Mappings. This profile gives access to any project and project sub-components (folders, Mappings, Knowledge Modules, etc.) stored in the repository. It also authorizes users to perform journalizing actions (start journal, create subscriber, etc.) or to run static controls over models and datastores.



3) METADATA ADMIN :- Generic profile for users responsible for managing models and reverse-engineering. Users having this profile are allowed to browse any project in order to select a CKM, RKM or JKM and to attach it to a specific model.



4) OPERATOR :- Generic profile for operators. It allows users to browse execution logs.



5) REPOSITORY EXPLORER :- Generic profile for meta-data browsing through Metadata Navigator. It also allows scenario launching from Metadata Navigator.



6) SECURITY ADMIN :- Generic profile for administrators of user accounts and profiles.



7) TOPOLOGY ADMIN :- Generic profile for users responsible for managing the information system topology. Users granted this profile are allowed to perform any action through the Topology Manager module.



8) VERSION ADMIN :- Generic profile for managing component versions as well as solutions. This profile must be coupled with DESIGNER and METADATA ADMIN.



9) NG DESIGNER :- Non-Generic profile for DESIGNER.



10) NG METADATA ADMIN :- Non-Generic profile for METADATA ADMIN.



11) NG REPOSITORY EXPLORER :- Non-generic profile for meta-data browsing through Metadata Navigator.



12) NG VERSION ADMIN :- Non-generic profile for VERSION ADMIN. It is recommended that you use this profile with NG DESIGNER and NG METADATA ADMIN.



Generic vs Non-Generic profiles :-

Generic profiles have the Generic privilege option checked for all objects' methods. This means that a user with such a profile is, by default, entitled to all methods of all instances of the object types covered by the profile.



Non-generic profiles do not, by default, entitle the user to all methods on instances, because the Generic privilege option is unchecked for all objects' methods. The administrator must grant the user the rights on the methods for each instance individually.



If an administrator wants a user to have rights on no instance by default, and to grant the rights instance by instance, the user must be given a non-generic profile.



If an administrator wants a user to have rights on all instances of an object type by default, the user must be given a generic profile.



The following operations are possible on a profile:






  • Creating a new profile

  • Assigning a profile to a user

  • Assigning an authorization by profile

  • Deleting an authorization by profile

  • Removing a profile from a user

  • Deleting a profile



What is a Load Plan? Explain Load Plans and the difference between a Load Plan and a Package.


Answer:-



What is a Load Plan?



A Load Plan is an executable object in Oracle Data Integrator that can contain a hierarchy of steps that can be executed conditionally, in parallel or in series. The leaf nodes of this hierarchy are Scenarios. Packages, mappings, variables, and procedures can be added to Load Plans for execution in the form of scenarios.



Load Plans can be started, stopped, and restarted from a command line, from Oracle Data Integrator Studio, Oracle Data Integrator Console or a Web Service interface. They can also be scheduled using the run-time agent's built-in scheduler or an external scheduler. 



A Load Plan can be modified in production environments and steps can be enabled or disabled according to the production needs. Load Plan objects can be designed and viewed in the Designer and Operator Navigators. 



Load Plan Structure:-



A Load Plan is made up of a sequence of several types of steps. Each step can contain several child steps. Depending on the step type, the steps can be executed conditionally, in parallel or sequentially. By default, a Load Plan contains an empty root serial step. This root step is mandatory and the step type cannot be changed.



1) Serial Step

Defines a serial execution of its child steps. Child steps are ordered, and a child step is executed only when the previous one has terminated. The root step is a Serial step.



2) Parallel Step



Defines a parallel execution of its child steps. Child steps are started immediately, in their order of priority.



3) Run Scenario Step



Launches the execution of a scenario.



4) Case Step



When Step



Else Steps



The combination of these steps allows conditional branching based on the value of a variable.



5) Exception Step



Defines a group of steps that is executed when an exception is encountered in the associated step from the Step Hierarchy. The same exception step can be attached to several steps in the Steps Hierarchy.



Note: If you have several When steps under a Case step, only the first enabled When step that satisfies the condition is executed. If no When step satisfies the condition or the Case step does not contain any When steps, the Else step is executed.
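As a purely illustrative sketch of the same semantics, the Java snippet below evaluates branches the way a Case step does: the enabled When branches are tested in order, only the first matching branch runs, and the Else branch runs when no When branch matches. The variable name and scenario names are invented for the example; a real Case step tests an ODI variable refreshed at run time.

    // Conceptual illustration of a Case step with When and Else branches.
    public class CaseStepSketch {
        public static void main(String[] args) {
            String loadType = "INCREMENTAL";  // stands in for the ODI variable tested by the Case step

            if ("INITIAL".equals(loadType)) {            // When step #1
                System.out.println("Run Scenario step: LOAD_FULL");
            } else if ("INCREMENTAL".equals(loadType)) { // When step #2
                System.out.println("Run Scenario step: LOAD_DELTA");
            } else {                                     // Else step
                System.out.println("Run Scenario step: LOAD_DEFAULT");
            }
        }
    }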



Differences between Packages, Scenarios, and Load Plans



A Load Plan is the largest executable object in Oracle Data Integrator. It uses Scenarios in its steps. When an executable object is used in a Load Plan, it is automatically converted into a scenario; for example, a package is used in the form of a scenario in Load Plans. Note that a Load Plan cannot be added to another Load Plan as a step. However, it is possible to add a Run Scenario step whose scenario starts another Load Plan using the OdiStartLoadPlan tool.



Load Plans are not substitutes for packages or scenarios; they are used to organize the execution of packages and scenarios at a higher level.



Unlike packages, Load Plans provide native support for parallelism, restartability and exception handling. Load plans are moved to production as is, whereas packages are moved in the form of scenarios. Load Plans can be created in Production environments.



Load Plan instances and Load Plan runs are similar to sessions. The difference is that when a session is restarted, the existing session is overwritten by the new execution, whereas a new Load Plan Run does not overwrite the existing Load Plan Run; it is added after the previous Load Plan Runs for that Load Plan Instance. Note that the Load Plan Instance cannot be modified at run time.

Tuesday, 29 January 2019

What is a Knowledge Module, and what are the types of Knowledge Modules?


Answer:- 



What is a Knowledge Module?



Knowledge Modules (KMs) are code templates. Each KM is dedicated to an individual task in the overall data integration process. The code in the KMs appears in nearly the form that it will be executed except that it includes Oracle Data Integrator (ODI) substitution methods enabling it to be used generically by many different integration jobs. The code that is generated and executed is derived from the declarative rules and metadata defined in the ODI Designer module.







Types of Knowledge Modules:-



 1) Reverse-Engineering Knowledge Modules (RKM)

 2) Check Knowledge Modules (CKM)

 3) Loading Knowledge Modules (LKM)

 4) Integration Knowledge Modules (IKM)

 5) Journalizing Knowledge Modules (JKM)

 6) Service Knowledge Modules (SKM)



  1) Reverse-Engineering Knowledge Modules (RKM):-


  "Retrieves metadata to the Oracle Data Integrator work repository.Used in models to perform a customized reverse-engineering"

  







  The RKM's role is to perform customized reverse-engineering for a model. The RKM is in charge of connecting to the application or metadata provider, then transforming and writing the resulting metadata into Oracle Data Integrator's repository. The metadata is written temporarily into the SNP_REV_xx tables. The RKM then calls the Oracle Data Integrator API to read from these tables and write to Oracle Data Integrator's metadata tables of the work repository in incremental update mode.

  

  2) Check Knowledge Modules (CKM):-


  "Checks consistency of data against constraints.Used in models, sub models and datastores for data integrity audit.Used in mappings for flow control or static control"

  









   The CKM can be used in 2 ways:



To check the consistency of existing data. This can be done on any datastore or within mappings, by setting the STATIC_CONTROL option to "Yes". In the first case, the data checked is the data currently in the datastore. In the second case, data in the target datastore is checked after it is loaded.









To check consistency of the incoming data before loading the records to a target datastore. This is done by using the FLOW_CONTROL option. In this case, the CKM simulates the constraints of the target datastore on the resulting flow prior to writing to the target.







The CKM accepts a set of constraints and the name of the table to check. It creates an "E$" error table to which it writes all the rejected records. The CKM can also remove the erroneous records from the checked result set.
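As a purely illustrative sketch of this mechanism, the standalone JDBC snippet below performs the kind of work a flow-control check does: copy rows violating a constraint into an E$ table, then remove them from the flow. The connection details, the table names (I$_CUSTOMER, E$_CUSTOMER) and the mandatory-column rule are assumptions made for the example; a real CKM generates its SQL from the constraints defined on the target datastore.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class FlowControlSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical staging-area connection.
            try (Connection stg = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//stg-host:1521/STGPDB", "ODI_STAGING", "password");
                 Statement stmt = stg.createStatement()) {

                // Copy rows that violate a (made-up) mandatory-column rule into the error table.
                stmt.executeUpdate(
                    "INSERT INTO E$_CUSTOMER (ERR_TYPE, ERR_MESS, CUST_ID, CUST_NAME) "
                  + "SELECT 'F', 'CUST_NAME is mandatory', CUST_ID, CUST_NAME "
                  + "FROM I$_CUSTOMER WHERE CUST_NAME IS NULL");

                // Remove the rejected rows so only clean rows are written to the target.
                stmt.executeUpdate("DELETE FROM I$_CUSTOMER WHERE CUST_NAME IS NULL");
            }
        }
    }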



 3) Loading Knowledge Modules (LKM):-


 "Loads heterogeneous data to a staging area.Used in mappings with heterogeneous sources"









 An LKM is in charge of loading source data from a remote server to the staging area. It is used by mappings when some of the source datastores are not on the same data server as the staging area. The LKM implements the declarative rules that need to be executed on the source server and retrieves a single result set that it stores in a "C$" table in the staging area.



The LKM creates the "C$" temporary table in the staging area. This table will hold records loaded from the source server.



The LKM obtains a set of pre-transformed records from the source server by executing the appropriate transformations on the source. Usually, this is done by a single SQL SELECT query when the source server is an RDBMS. When the source does not have SQL capabilities (such as flat files or applications), the LKM simply reads the source data with the appropriate method (read file or execute API).



The LKM loads the records into the "C$" table of the staging area.



A mapping may require several LKMs when it uses datastores from different sources. When all source datastores are on the same data server as the staging area, no LKM is required.
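A minimal sketch of the loading work an LKM performs, in plain JDBC, is shown below: read pre-transformed rows from a remote source server and batch-insert them into a C$ work table in the staging area. The connection strings, the ORDERS source table and the C$_ORDERS work table are assumptions made for the example; an actual LKM generates and runs this kind of code itself.

    import java.sql.Connection;
    import java.sql.Date;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class LkmSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical source server and staging-area connections (two different servers).
            try (Connection src = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//src-host:1521/SRCPDB", "SRC_USER", "password");
                 Connection stg = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//stg-host:1521/STGPDB", "ODI_STAGING", "password");
                 // Source-side transformation expressed as a single SELECT.
                 PreparedStatement read = src.prepareStatement(
                     "SELECT ORDER_ID, UPPER(STATUS) AS STATUS FROM ORDERS WHERE ORDER_DATE >= ?");
                 // Load into the C$ work table of the staging area.
                 PreparedStatement write = stg.prepareStatement(
                     "INSERT INTO C$_ORDERS (ORDER_ID, STATUS) VALUES (?, ?)")) {

                read.setDate(1, Date.valueOf("2019-01-01"));
                try (ResultSet rs = read.executeQuery()) {
                    while (rs.next()) {
                        write.setLong(1, rs.getLong("ORDER_ID"));
                        write.setString(2, rs.getString("STATUS"));
                        write.addBatch();
                    }
                }
                write.executeBatch();
            }
        }
    }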



4) Integration Knowledge Modules (IKM):-


   "Integrates data from the staging area to a target.Used in mappings"

   

   The IKM is in charge of writing the final, transformed data to the target table. Every mapping uses a single IKM. When the IKM is started, it assumes that all loading phases for the remote servers have already carried out their tasks. This means that all remote source data sets have been loaded by LKMs into "C$" temporary tables in the staging area, or the source datastores are on the same data server as the staging area. Therefore, the IKM simply needs to execute the "Staging and Target" transformations, joins and filters on the "C$" tables, and tables located on the same data server as the staging area. The resulting set is usually processed by the IKM and written into the "I$" temporary table before loading it to the target. These final transformed records can be written in several ways depending on the IKM selected in your mapping. They may be simply appended to the target, or compared for incremental updates or for slowly changing dimensions. 

   

   There are two types of IKMs: those that assume that the staging area is on the same server as the target datastore, and those that can be used when it is not.
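As an illustration of the final integration step, the sketch below merges an I$ work table into a target table the way an incremental-update style IKM would (update existing rows, insert new ones). The target connection and the DW_CUSTOMER and I$_CUSTOMER names are made up for the example; a real IKM derives this statement from the mapping metadata and the selected update key.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class IkmIncrementalUpdateSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical target data server (also hosting the staging area in this example).
            try (Connection tgt = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//tgt-host:1521/TGTPDB", "ODI_TARGET", "password");
                 Statement stmt = tgt.createStatement()) {

                // Merge the transformed flow (I$ table) into the target table.
                stmt.executeUpdate(
                    "MERGE INTO DW_CUSTOMER T "
                  + "USING I$_CUSTOMER S ON (T.CUST_ID = S.CUST_ID) "
                  + "WHEN MATCHED THEN UPDATE SET T.CUST_NAME = S.CUST_NAME "
                  + "WHEN NOT MATCHED THEN INSERT (CUST_ID, CUST_NAME) "
                  + "VALUES (S.CUST_ID, S.CUST_NAME)");
            }
        }
    }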







   5) Journalizing Knowledge Modules (JKM):-


"Creates the Change Data Capture framework objects in the source staging area.Used in models, sub models and datastores to create, start and stop journals and to register subscribers."









JKMs create the infrastructure for Change Data Capture on a model, a sub model or a datastore. JKMs are not used in mappings, but rather within a model to define how the CDC infrastructure is initialized. This infrastructure is composed of a subscribers table, a table of changes, views on this table, and one or more triggers or log capture programs.
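For a sense of what that infrastructure looks like, the sketch below creates a simplified journal table and an insert/update trigger on a source table. All object names (CUSTOMER, J$_CUSTOMER, the trigger, the SUNOPSIS subscriber) are invented for the example, and a real trigger-based JKM creates richer objects (subscriber tables, views, and delete handling) or uses log-based capture instead of triggers.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class JkmCdcSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical source data server where CDC is set up.
            try (Connection src = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//src-host:1521/SRCPDB", "ODI_SRC", "password");
                 Statement stmt = src.createStatement()) {

                // Simplified journal (changes) table: subscriber, change flag, change date, source PK.
                stmt.executeUpdate(
                    "CREATE TABLE J$_CUSTOMER (JRN_SUBSCRIBER VARCHAR2(400), "
                  + "JRN_FLAG CHAR(1), JRN_DATE DATE, CUST_ID NUMBER)");

                // Simplified trigger recording the primary key of every inserted or updated row.
                stmt.executeUpdate(
                    "CREATE OR REPLACE TRIGGER T$_CUSTOMER "
                  + "AFTER INSERT OR UPDATE ON CUSTOMER FOR EACH ROW "
                  + "BEGIN "
                  + "  INSERT INTO J$_CUSTOMER (JRN_SUBSCRIBER, JRN_FLAG, JRN_DATE, CUST_ID) "
                  + "  VALUES ('SUNOPSIS', 'I', SYSDATE, :NEW.CUST_ID); "
                  + "END;");
            }
        }
    }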



6) Service Knowledge Modules (SKM):-


"Generates data manipulation web services.Used in models and datastores"  

SKMs are in charge of creating and deploying data manipulation Web Services to your Service Oriented Architecture (SOA) infrastructure. SKMs are set on a Model. They define the different operations to generate for each datastore's web service. Unlike other KMs, SKMs do not generate executable code, but rather the Web Services deployment archive files. SKMs are designed to generate Java code using Oracle Data Integrator's framework for Web Services. The code is then compiled and eventually deployed on the Application Server's containers.

What are the types of LOG LEVELS in ODI?


Answer:-



The Log Level field value specifies a user-set level of severity for a specific Command Line and is used to determine which records will be retained in ODI Operator (the journal) after the execution post-processing clean-up operation.



Seven levels of user-set values may be specified for the retention of the ODI journal records:



ODI Log Level Descriptions:



0 – This will retain execution details ONLY in case of execution failure.



1 – This will retain only Session-level details. Nothing will be displayed at the Session Step or Session Task level.



2, 3, 4 or 5 – Retention at these levels depends on the Log Level values specified on the Procedure or Knowledge Module Command Lines.



For example, if a Command Line is defined with a Log Level of 4, its runtime information will be displayed in Operator (the journal) only if the user specifies a Log Level of 4 or 5 (command-line parameter "-V=4" or the value specified in the Agent and Context selection popup window).



6 – Starting with ODI 11.1.1.6.0, an additional Log Level was introduced to allow the tracking and debugging of Variable and Sequence values.

With Log Level 6, when reviewing the execution results in ODI Studio > Operator or ODI Console, open the "Variables and Sequence Values" section of the Session Step or Session Task editor to view the current value of a Variable or Sequence.



Regardless of the precise log level used, ODI always writes the complete Session, Step and Task information to the Repository execution-related tables. During the entire execution runtime, these details are visible within Operator.



If the process is successful, all Task log records for which the log level value is less than (or equal to) the one set at execution launch time will be saved in the ODI Work Repository at the end of execution and will be visible within Operator.



If the process is interrupted by an error during execution, then, regardless of the Log Level value, all Task records are retained in the ODI Work Repository and remain visible within Operator.



A Log Level value of 5 or 6 will retain all records in ODI Operator at Session termination, regardless of the termination status of the process.
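The retention rules above can be summarized in a small sketch (a simplification for illustration only, not ODI's actual internal code):

    public class LogRetentionSketch {

        // A task log record survives the post-execution clean-up if the session failed,
        // if the session was started at level 5 or 6, or if the task's own log level is
        // lower than or equal to the level set at execution launch time.
        static boolean isRetained(int taskLogLevel, int sessionLogLevel, boolean sessionFailed) {
            if (sessionFailed) {
                return true;
            }
            if (sessionLogLevel >= 5) {
                return true;
            }
            return taskLogLevel <= sessionLogLevel;
        }

        public static void main(String[] args) {
            System.out.println(isRetained(4, 2, false)); // false: task too detailed for level 2
            System.out.println(isRetained(4, 5, false)); // true: level 5 keeps everything
            System.out.println(isRetained(4, 2, true));  // true: a failed session keeps all records
        }
    }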



What is the impact of the choice of log level value on ODI runtime performance?




Using log levels lower than 5 has an impact on execution performance.



This is because ODI writes to the Repository tables during session runtime.

Then, after the process completes, ODI performs a number of delete statements in order to retain only the desired log records.



Therefore:



Yes, using log levels lower than 5 results in less data being stored in the ODI Repository tables; and no, it does not improve execution performance (the additional delete statements sent to the database require additional processing time and network bandwidth).



Best fit for performance:

Use a log level of 5, then delete the unneeded level of detail at another moment of the day, for example when database, network and ODI Agent resources are at an off-peak period.

This can be performed either:




  • manually

  • using the OdiPurgeLog tool at predetermined times








Difference between ODI 11g and ODI 12c.


Answer:- 


S.No | ODI 11g | ODI 12c
1 | No Component Palette | Component Palette added
2 | No Debugger | Debugger
3 | Interface for loading source to target (sub tabs: Overview, Quick Edit, Flow, Control) | Mapping for loading source to target (sub tabs: Overview, Logical, Physical)
4 | An interface can have only one target datastore | A mapping can have more than one target datastore (i.e., more than one datastore can be loaded at a time)
5 | OWB jobs cannot be executed | OWB jobs can be executed in ODI 12c
6 | Incremental and initial loads require two different interfaces | Incremental and initial loads can be accomplished with a single mapping (using Deployment Specifications)
7 | No Wallet Password | Wallet Password is available
8 | Temporary Interfaces | Reusable Mappings





ODI 12c Features which were not present in ODI 11g:-

1) New mappings

2) Reusable Mappings

3) Debugger

4) In-Session Parallelism

With ODI 12c, it is now possible to have the extract tasks (LKMs) running in parallel, and this is actually done by default. If two sources are located in the same execution unit on the physical tab, they will run in parallel. If you want a sequential execution, you can drag and drop one of your units onto a blank area; a new execution unit will be created and ODI will choose the order in which they are loaded.

5) Parallel Target Table Load

With ODI 11g, if two interfaces loading data into the same datastore were executed at the same time, or if the same interface was executed twice, you could run into problems: for instance, one session might delete a work table into which the other session wanted to insert data.

This now belongs to the past. With 12c, a new "Use Unique Temporary Object Names" checkbox appears in the Physical tab of your mapping. If you select it, the work tables of every session will have unique names, so you can be sure that another session will not delete them or insert other data into them.

6) Datastore Change Notification

7) Wallet

8) Release Management :- Using a Version Control System (VCS)

Managing ODI Releases

You can manage ODI releases using deployment archives. A deployment archive is an archived file (zip file) that contains a set of ODI objects in the form of XML files and metadata. You can create deployment archives that can be used to either initialize an ODI repository or to update a deployed ODI repository.

If ODI is integrated with a VCS, deployment archives can be created from the VCS labels. If ODI is not integrated with a VCS, deployment archives can be created from the current ODI repository.

See also, Types of Deployment Archives.

Types of Deployment Archives

You can create the following types of deployment archives in ODI:
  • Initial Deployment Archives

    Initial deployment archives contain all the ODI objects that are necessary to initialize an ODI repository. You can create an initial deployment archive and use it to deploy an ODI repository in an environment where the ODI objects are not modified, for example, in a testing or a production environment.

  • Patch Deployment Archives

    Patch deployment archives contain only the ODI objects that need to be updated in an ODI repository. You can create a patch deployment archive and use it to update an ODI repository that is already deployed. For example, when you update any ODI objects in a development environment, the updates can be applied in a testing or a production environment using a patch deployment archive.



Oracle Data Integrator 12c Architecture.


Answer:-

Oracle Data Integrator 12c Architecture





1) ODI Repositories:-



The central component of the architecture is the Oracle Data Integrator Repository. It stores configuration information about the IT infrastructure, metadata of all applications, projects, scenarios, and the execution logs. The architecture of the repository is designed to allow several separated environments that exchange metadata and scenarios (for example: Development, Test, Maintenance and Production environments). The repository also acts as a version control system where objects are archived and assigned a version number.











The ODI Repository is composed of one Master Repository and several Work Repositories. Objects developed or configured through the user interfaces are stored in one of these repository types.





Master Repository:



There is usually only one master repository that stores the following information:

  • Security information including users, profiles and rights for the ODI platform

  • Topology information including technologies, server definitions, schemas, contexts, languages and so forth

  • Versioned and archived objects





Work Repository:



The work repository is the one that contains the actual developed objects. Several work repositories may coexist in the same ODI installation (for example, to have separate environments or to match a particular versioning life cycle). A Work Repository stores information for:

  • Models, including schema definitions, datastore structures and metadata, field and column definitions, data quality constraints, cross references, data lineage and so forth

  • Projects, including business rules, packages, procedures, folders, Knowledge Modules, variables and so forth

  • Scenario execution, including scenarios, scheduling information and logs

When the Work Repository contains only the execution information (typically for production purposes), it is then called an Execution Repository.



2) Graphical User Interfaces:-









The graphical user interface is called ODI Studio. ODI Studio is used to access the Master and Work Repositories. The various tools/components (discussed below) within ODI Studio help in administering the infrastructure, developing projects, and scheduling and monitoring executions.



ODI provides 4 tools to manage different aspects and steps of an ODI project:






  • Designer

  • Operator

  • Topology Manager

  • Security Manager




A. Designer





The Designer Navigator is the component of ODI where most of a project's metadata is defined. It is used for designing ODI metadata and mapping objects. Some of the metadata components defined in Designer are as follows:



Models: Models are basically the source or target definitions in an integration project. ODI supports models on various technologies; some of them are Oracle, DB2, Teradata, XML, Flat Files, Web Services, etc.



Projects: Projects are the components that hold all the loading and transformation rules either for a functional module or an entire enterprise data warehouse. Some of the components in the projects are interfaces, procedures and packages.





B. Operator





In the Operator Navigator, you can monitor the execution of interfaces, packages, scenarios or load plans. The step-by-step session monitoring also helps with debugging.



C. Topology Manager



The topology manager is used to describe the logical and physical architecture of the information system. The topology manager reads and writes only to the master repository as it maintains the technologies, data servers, schemas, contexts and other related information for each of the physical environments. This enables ODI to execute the same integration interfaces across different physical environments.



D. Security Manager



The Security Manager, as the name suggests, is used for managing security in ODI. Users and profiles can be created here, and privileges can be assigned to these users or profiles. The security metadata defined here is stored in the master repository.





3) Run-Time Agents:-







At runtime, the Agent coordinates the execution of the ODI sessions. It retrieves the code stored in the ODI repository, connects to the various source and target systems and orchestrates the overall data integration process. There are three types of Agents in Oracle Data Integrator 12c:



• Standalone Agents can be installed on the source or target systems and require a Java Virtual Machine.



• Colocated Standalone Agents can be installed on the source or target systems as well. They can be managed using Oracle Enterprise Manager and must be configured with an Oracle WebLogic domain. Colocated Standalone Agents can run on a separate machine from the Oracle WebLogic Administration Server.



• Java Enterprise Edition (Java EE) Agents are deployed on Oracle WebLogic Server and can benefit from the application server layer features such as clustering for High Availability requirements. Java EE Agents can be managed using Oracle Enterprise Manager.



4) ODI SDK:-



The ODI 12c SDK provides a mechanism to accelerate data integration development using patterns and the APIs in the SDK.

Oracle Data Integrator also provides a Java API for performing all these run-time and design-time operations. This Oracle Data Integrator Software Development Kit (SDK) is available for standalone Java applications and application servers.
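A minimal SDK bootstrap looks roughly like the sketch below. The class names come from the public oracle.odi.core packages, but the repository URLs, user names and exact constructor signatures shown here are assumptions that can vary between ODI releases, so treat this as a starting point rather than a definitive example.

    import oracle.odi.core.OdiInstance;
    import oracle.odi.core.config.MasterRepositoryDbInfo;
    import oracle.odi.core.config.OdiInstanceConfig;
    import oracle.odi.core.config.PoolingAttributes;
    import oracle.odi.core.config.WorkRepositoryDbInfo;
    import oracle.odi.core.security.Authentication;

    public class OdiSdkBootstrapSketch {
        public static void main(String[] args) {
            // Hypothetical connection details for the master and work repositories.
            MasterRepositoryDbInfo master = new MasterRepositoryDbInfo(
                    "jdbc:oracle:thin:@//repo-host:1521/ODIREP", "oracle.jdbc.OracleDriver",
                    "ODI_MASTER", "master_pwd".toCharArray(), new PoolingAttributes());
            WorkRepositoryDbInfo work = new WorkRepositoryDbInfo("WORKREP", new PoolingAttributes());

            OdiInstance odi = OdiInstance.createInstance(new OdiInstanceConfig(master, work));
            try {
                // Authenticate as an ODI user before any design-time or run-time call.
                Authentication auth = odi.getSecurityManager()
                        .createAuthentication("SUPERVISOR", "sup_pwd".toCharArray());
                odi.getSecurityManager().setCurrentThreadAuthentication(auth);

                // ... design-time or run-time operations would go here ...
            } finally {
                odi.close();
            }
        }
    }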





5) ODI Console:-





The ODI Console is a web-based user interface (UI) where business users, developers, administrators and operators can have read access to the repository. These business users can also perform topology configuration and production operations.



The ODI Console is deployed on Oracle WebLogic Server. A plug-in is also available to integrate it with the Oracle Fusion Middleware Control Console.
Plug-in available to integrate with the Oracle Fusion Middleware Control Console.