
Why Perl is my choice for scripting


I have been advising the use of Perl for a long time to automate Oracle processes and operations. This week, however, I tried for once to write a small procedure in a plain Linux shell (ksh and bash). This posting focuses on shell internals and “nightmares” more than on Oracle-related issues.

The goal of this procedure was to “send” a “relocate” to an Oracle Management Server after a failover at the target database. For this purpose, I had to retrieve some information about the Grid Control 11g configuration – OMS host, agent URL, etc.

In order to retrieve the OMS and AGENT URLs, I used the “emctl status agent” command:

oracle@server1.company.com:/u00/ [AGENT11G] emctl status agent
Oracle Enterprise Manager 11g Release 1 Grid Control 11.1.0.1.0
Copyright (c) 1996, 2010 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version     : 11.1.0.1.0
OMS Version       : 11.1.0.1.0
Protocol Version  : 11.1.0.0.0
Agent Home        : /u00/app/oracle/product/agent/agent11g
Agent binaries    : /u00/app/oracle/product/agent/agent11g
Agent Process ID  : 26194
Parent Process ID : 5863
Agent URL         : https://server1.company.com:3872/emd/main/
Repository URL    : https://oemgrid.company.com:4900/em/upload
Started at        : 2011-10-30 02:18:43
Started by user   : oracle
Last Reload       : 2012-01-30 11:52:55
Last successful upload                       : 2012-02-06 12:08:06
Total Megabytes of XML files uploaded so far :  7760.72
Number of XML files pending upload           :        0
Size of XML files pending upload(MB)         :     0.00
Available disk space on upload filesystem    :    60.78%
Last successful heartbeat to OMS             : 2012-02-06 12:18:03
---------------------------------------------------------------
Agent is Running and Ready

For this purpose I wrote a small loop that reads all the relevant lines into a shell array. This offered me the possibility to scan the array afterwards and work with the required values.

i=0
emctl status agent | grep URL | awk '{print $4}' | while read line
do
  var[$i]=$line
  let "i = $i + 1"
done
echo ${var[0]}
echo ${var[1]}

The output is, as expected:

URL of the Agent:
oracle@server1.company.com:/u00/ [AGENT11G] echo ${var[0]}
https://server1.company.com:3872/emd/main/

URL of the Oracle Management Server (OMS):
oracle@server1.company.com:/u00/ [AGENT11G] echo ${var[1]}
https://oemgrid.company.com:4900/em/upload

The shell used was the Korn shell on Red Hat Linux:

oracle@server1.company.com:/u00/ [AGENT11G] rpm -qa | grep ksh
ksh-20100202-1.el5

oracle@server1.company.com:/u00/ [AGENT11G] cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.5 (Tikanga)

Unfortunately, when running exactly the same code on the following platform (still Korn shell and still Red Hat, but a bit “older”):

oracle@server2.mycompany.com:~/ [AGENT11G] cat /etc/redhat-release
Red Hat Enterprise Linux AS release 4 (Nahant Update 8)

oracle@server2.mycompany.com:~/ [AGENT11G] rpm -qa | grep ksh
pdksh-5.2.14-30.6

The shell array “var” was not available after the execution of the loop (!):

oracle@server2.mycompany.com:~/ [AGENT11G] echo ${var[0]}

oracle@server2.mycompany.com:~/ [AGENT11G] echo ${var[1]}

It is also worth mentioning that, unfortunately, this piece of code does not work in bash either: pdksh and bash execute the commands of a pipeline in subshells, so the array populated inside the loop is lost as soon as the subshell exits, whereas ksh93 runs the last command of a pipeline in the current shell (which is why it worked on the first platform).
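For the record, there is a portable way to write this that avoids the pipeline-into-while construct entirely. A minimal sketch, using the same grep/awk parsing as above and only command substitution and positional parameters, which behave identically in ksh93, pdksh, and bash:

# Capture the URLs first, then split them in the current shell:
# no subshell is involved, so the values remain available afterwards.
urls=$(emctl status agent | grep URL | awk '{print $4}')
set -- $urls        # word splitting fills $1, $2, ...
echo "$1"           # Agent URL
echo "$2"           # Repository (OMS upload) URL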

This confirmed my preference for Perl for these kinds of operations and automation. The various Linux/UNIX shells definitely behave in quite different ways.

 



Oracle Enterprise Manager 12c: Administration Groups & Template Collections


This post describes the implementation of Administration Groups and Template Collections with Oracle Enterprise Manager Cloud Control 12c. Administration Groups (an improvement of the Grid Control 10g target groups) and Template Collections improve the management of targets and ease the implementation of the monitoring strategy.

Administration Groups allow you to organize targets according to several criteria: for instance the lifecycle status of the target (production, development, integration), the location of the target (its city or country), and so on. The main goal is to apply common settings (e.g. monitoring settings) to targets having the same purpose.

The first point about administration groups is to define a clear concept and hierarchy for these criteria. The administration groups must be organized in a way that matches the company organization. In the next chapter, we will show how to take the default hierarchy existing in Cloud Control 12c and change it into an administration group organization that fits the company structure.

Target properties are then used to automatically place each target in the appropriate administration group.

An example

In the following example, we will create, based on the existing administration groups, the following hierarchy, which matches the company organization:

Basically, the company has two groups of systems: one for managing sales and one for managing costs. The “systems” can be any kind of targets: databases, WebLogic servers, hosts, etc. Each of these systems exists in the PRODUCTION and DEVELOPMENT lifecycle statuses. It has been decided to group all production targets together and then separate the targets into underlying groups according to their “role”, COST or SALES; the same strategy has of course been followed for DEV.

Let’s see how we can achieve this.

To create an Administration Group, you have to choose Setup → Add target → Add Administration Groups.

First of all, we have to select the “primary” criterion used to sort the targets in the administration group hierarchy. In our case, we use the “Lifecycle Status” hierarchy level as the root:

By default, the following hierarchy is created:

The different hierarchy levels are:

We can merge “Mission Critical” and “Production”, and on the other side merge “Staging”, “Test”, and “Development”: holding the Control key, we select Development, Staging, and Test, then we select Merge:

We have to perform the same operations for “Production” and “Mission Critical”. At the end, we have the following hierarchy:

We can rename Deve-Grp to DEV and do the same for MC_GRP, renaming it to PROD:

And finally we obtain:

Then, we can add a new hierarchy level by choosing Line of business, for example:

We now have the following organization for example with Costs and Sales:

Like in the previous stage we can change the name of the sub-groups:

Do not forget to click on the Create button, and the Administration Group is created:

The next step is about Template Collections. Template Collections are an extension of the existing monitoring templates (available since Grid Control 10g). They allow you to group several templates for several kinds of targets into one entity which can be “applied” to an Administration Group. To create a template collection, we choose Setup → Administration Group and select “Template Collections”:

The classical database, listener, and host templates have already been created; we choose “Add” to create a Template Collection which consists of a set of these templates:

We have to associate the monitoring templates with the administration groups. We select “Association”, then select the DEV_GRP group, choose Associate Template Collection, and associate the DEV template collection with the DEV_GRP group:

We associate the PROD monitoring templates with the PROD_GRP. By doing this, the sub-groups get the properties of their parent group. We can check, for example, the monitoring settings of the DEV_SALES_GROUP: we select this sub-group and select “View Aggregate Settings”:

Now we have to associate the different targets with the groups. Let’s see how to perform this operation. For a database, we select the database and choose Oracle Database → Target Setup → Properties:

We set the Lifecycle Status to Production:

Another solution, in case of a huge number of targets, is to use emcli on the OMS server to modify the target properties. At first, we set up emcli on the OMS server:

oracle@eurorclg02:/u01/app/oracle/Middleware/oms/bin/ [oms12c] ./emcli setup -url=https://vmtestoraem12c.it.dbi-services.com:4900/em/ -username=sysman
Oracle Enterprise Manager Cloud Control 12c Release 12.1.0.1.0.
Copyright (c) 1996, 2011 Oracle Corporation and/or its affiliates. All rights reserved.
Enter password
Do you trust the certificate chain? [yes/no] yes
Emcli setup successful

We have to use the set_target_property_value verb to update the property for a target. Watch out: most of these arguments are case-sensitive! The official syntax is:

emcli set_target_property_value
-property_records="target_name:target_type:property_name:property_value"

If we need to update the property LifeCycle Status to Production for the DB112 database, we have to run:

oracle@eurorclg02:/u01/app/oracle/Middleware/oms/bin/ [oms12c] emcli set_target_property_value -property_records="DB112:oracle_database:LifeCycle Status:Production"
Properties updated successfully
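When a large number of databases must be moved to a given lifecycle status, the same verb can simply be called in a loop. A minimal sketch (the file prod_databases.txt, holding one database target name per line, is a placeholder):

# Hypothetical batch update: set the lifecycle status for a list of databases.
while read db
do
  emcli set_target_property_value \
    -property_records="${db}:oracle_database:LifeCycle Status:Production"
done < prod_databases.txt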

Let’s now see if the PROD_SALES_GROUP has targets defined in it. We select the PROD_SALES_GROUP and select Go To Group Home Page:

We can see the DB112 target database; however, this database is not synchronized yet. We have to select “Start Synchronization” to apply the templates right now. It takes a little time, but finally the target gets synchronized:

We perform the same operations for the other groups, and finally our Administration Group infrastructure is ready!

Limitations

Hierarchy levels organization:

The hierarchy levels are “redundant”, which means that each level contains the same sub-levels.

In the following example (http://docs.oracle.com/cd/E24628_01/doc.121/e24473/administration_group.htm#autoId5), “Prod” and “Staging” have the same locations, and each location has the same lines of business. We might imagine systems with different lifecycle statuses (i.e. PROD/TEST) located in different places (i.e. PROD at ZH/BSL and TEST at GVA/LS). However, this does not seem to be possible with the current implementation.

Hierarchy maintenance:

To modify the hierarchy, the whole hierarchy must be dropped and recreated. This can also be a source of losing some settings; therefore, the concept must be correctly defined from the start.

Definition of some monitoring settings (i.e monitoring configuration) :

It is not possible to set up the monitoring configuration (i.e. the dbsnmp username and password) for an administration group. Furthermore, it is not possible to set a “generic” connect string, like a SCAN address in a cluster for instance. Such a feature would avoid having to configure each target individually.

Conclusion

The use of Administration Groups and Template Collections eases the automation of monitoring different kinds of targets. The main advantage is the automatic registration of a target in a specific administration group as soon as a property is assigned to it. This is quite interesting for DBAs, because in huge data centers they will no longer forget to administer a target.

 


Oracle Enterprise Manager 12c: creation and management of administrators through emcli


Cloud Control 12c (and the former Grid Control 11g) offers the possibility to create administrators and manage their privileges through the “emcli” command line utility. The main advantage of this script-based method is being able to reproduce the creation of the users as soon as a new Cloud Control infrastructure must be built up (for instance, in order to migrate from Grid Control 11g on Windows to Cloud Control 12c on Linux).

Indeed, whereas some objects like the monitoring templates can be easily exported and imported, there is no possibility to export and import the Grid/Cloud Control users.

Creating these users through scripts thus offers the advantage of being able to reproduce their creation on a new environment.

To get a complete help of the “emcli create_user” command, use the following statement:

# emcli help create_user
  emcli create_user
        -name="name"
        -password="password"
        [-type="type of user"]
        [-roles="role1;role2;..."]
        [-email="email1;email2;..."]
        [-privilege="name[;secure-resource-details]]"
        [-separator=privilege="sep_string"]
        [-subseparator=privilege="subsep_string"]
        [-profile="profile_name"]
        [-desc="user_description"]
        [-expired="true/false"]
        [-prevent_change_password="true/false"]
        [-department="department_name"]
        [-cost_center="cost_center"]
        [-line_of_business="line_of_business"]
        [-contact="contact"]
        [-location="location"]
        [-input_file="arg_name:file_path"]

The name and password of the user are mandatory parameters. Besides these parameters, the other important settings for a Grid Control user are, of course, its privileges and access rights.

Concerning privilege management, Cloud Control 12c distinguishes between three main groups of privileges:

  • privileges concerning Jobs
  • privileges concerning Targets
  • System privileges

To get details about these privileges, use the following commands (once connected to CC 12c with “emcli login -username=”):

oracle@chhs-sora011:/home/oracle [oms12c] emcli get_supported_privileges -type=SYSTEM

As an example, we will create a simple user having access to a particular database (the Enterprise Manager repository database):

emcli create_user -name="useryann" -password="useryann" -privilege="view_target;EMREP12_SITE1.domain.ch:oracle_database"
User "USERYANN" created successfully

To extend a user in order to grant additional privileges, the modify_user verb can be used (be careful: the existing privileges must be specified again during the modification, otherwise they will be lost):

emcli modify_user -name="useryann" -privilege="view_target;EMREP12_SITE1.domain.ch:oracle_database"
-privilege="CONNECT_TARGET;EMREP12_SITE1.domain.ch:oracle_database"
User "USERYANN" modified successfully

The “connect_target” privilege allows access to the performance views of the database target, supposing the user also knows a database user credential to access it.
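Since the whole point of this method is reproducibility, such commands are typically collected in a small script that can be re-run on a freshly built Cloud Control infrastructure. A minimal sketch (user name, password, and target name are placeholders to adapt):

#!/bin/sh
# Hypothetical user-creation script: log in once, create the users, log out.
emcli login -username=sysman
emcli create_user -name="useryann" -password="changeme" \
  -privilege="view_target;EMREP12_SITE1.domain.ch:oracle_database" \
  -privilege="CONNECT_TARGET;EMREP12_SITE1.domain.ch:oracle_database"
emcli logout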

Drawback of the emcli/script-based method

Of course, if Oracle changes/adds/removes some privileges in Cloud Control 12c, the script won’t be accurate anymore and will have to be adapted to new releases of the Cloud Control infrastructure. This will however take less time than re-creating all users through the graphical user interface.

Since Cloud Control 12c, the system privilege granularity is much finer: more than 75 system privileges are available, compared to the 11 system privileges of Grid Control 11g.

Details of the system privileges are available under:

http://docs.oracle.com/cd/E25178_01/doc.1111/e24473.pdf

In order to check the current privileges of a Cloud Control 12c administrator, emcli does not provide any command (or verb). The only possibility is therefore to access the repository as the repository owner (SYSMAN) and run the following select:

set lines 132
set pages 999

col GRANTEE format a20
col PRIV_NAME format a25
col TARGET_NAME format a40
col TARGET_TYPE format a25

select grantee, PRIV_NAME, TARGET_NAME, TARGET_TYPE
from MGMT_PRIV_GRANTS pg, MGMT_TARGETS mt
where pg.GUID = mt.TARGET_GUID
and grantee = 'USERYANN'
/

Below is some information about the available Cloud Control 12c privileges. To list the supported privileges for job management:

# emcli get_supported_privileges -type=JOB
 Privilege Name  Privilege Scope  Security Class  Resource Guid Column  Resource Id Columns
 MANAGE_JOB      Resource         JOB             JOB_ID
 GRANT_VIEW_JOB  Resource Type    JOB
 FULL_JOB        Resource         JOB             JOB_ID
 CREATE_JOB      Resource Type    JOB
 VIEW_JOB        Resource         JOB             JOB_ID

List of the supported system privileges:

# emcli get_supported_privileges -type=SYSTEM
Privilege Name                  Privilege Scope  Security Class           Resource Guid Column  Resource Id Columns
 MANAGE_PRIV_ANY_PATCH_PLAN      Resource Type    PATCH
 CREATE_PLAN_TEMPLATE            Resource Type    PATCH
 PATCH_SETUP                     Resource Type    PATCH
 CREATE_PATCH_PLAN               Resource Type    PATCH
 VIEW_ANY_PATCH_PLAN             Resource Type    PATCH
 FULL_ANY_PATCH_PLAN             Resource Type    PATCH
 CREATE_BUSINESS_RULESET         Resource Type    RULESET_SEC
 SWLIB_EXPORT                    Resource Type    SWLIB_ENTITY_MGMT
 SWLIB_EDIT_ANY_ENTITY           Resource Type    SWLIB_ENTITY_MGMT
 SWLIB_MANAGE_ANY_ENTITY         Resource Type    SWLIB_ENTITY_MGMT
 SWLIB_IMPORT                    Resource Type    SWLIB_ENTITY_MGMT
 SWLIB_CREATE_ANY_ENTITY         Resource Type    SWLIB_ENTITY_MGMT
 SWLIB_VIEW_ANY_ENTITY           Resource Type    SWLIB_ENTITY_MGMT
 SWLIB_GRANT_ANY_ENTITY_PRIV     Resource Type    SWLIB_ENTITY_MGMT
 GRANT_VIEW_JOB                  Resource Type    JOB
 CREATE_JOB                      Resource Type    JOB
 VIEW_ANY_TC                     Resource Type    TEMPLATECOLLECTION
 CREATE_TC                       Resource Type    TEMPLATECOLLECTION
 CREATE_OBJECT                   Resource Type    FMW_DIAG_SEC_CLASS
 VIEW_OBJECT                     Resource Type    FMW_DIAG_SEC_CLASS
 BTM_USER                        Resource Type    BTM
 BTM_ADMINISTRATOR               Resource Type    BTM
 SWLIB_STORAGE_ADMIN             Resource Type    SWLIB_ADMINISTRATION
 PUBLISH_REPORT                  Resource Type    REPORT_DEF
 VIEW_BA_MENU_ITEM               Resource Type    APM
 VIEW_APM_PAYLOAD                Resource Type    APM
 ACCESS_APM_SESSION_DIAG         Resource Type    APM
 ASSOCIATE_APM_ENTITIES          Resource Type    APM
 IMPORT_DP                       Resource Type    DP
 CREATE_DP                       Resource Type    DP
 GRANT_FULL_DP                   Resource Type    DP
 GRANT_LAUNCH_DP                 Resource Type    DP
 OPERATOR_ANY_TARGET             Resource Type    TARGET
 PERFORM_OPERATION_ANYWHERE      Resource Type    TARGET
 FULL_ANY_TARGET                 Resource Type    TARGET
 PUT_FILE_AS_ANY_AGENT           Resource Type    TARGET
 PERFORM_OPERATION_AS_ANY_AGENT  Resource Type    TARGET
 CREATE_TARGET                   Resource Type    TARGET
 CONNECT_ANY_VIEW_TARGET         Resource Type    TARGET
 CREATE_PROPAGATING_GROUP        Resource Type    TARGET
 VIEW_ANY_TARGET                 Resource Type    TARGET
 USE_ANY_BEACON                  Resource Type    TARGET
 EM_MONITOR                      Resource Type    TARGET
 CREATE_BACKUP_CONFIG            Resource Type    SBRM_BACKUP_CONFIG
 CREATE_MEXT                     Resource Type    MEXT_SECURE_CLASS
 FULL_ANY_CCS                    Resource Type    CCS_SECURE_CLASS
 FULL_OWNED_CCS                  Resource Type    CCS_SECURE_CLASS
 CREATE_CREDENTIAL               Resource Type    NAMED_CREDENTIALS
 SUPER_USER                      Resource Type    SYSTEM
 VIEW_ANY_TEMPLATE               Resource Type    TEMPLATE
 VIEW_ANY_SELFUPDATE             Resource Type    SELFUPDATE_SECURE_CLASS
 SELFUPDATE_ADMINISTRATOR        Resource Type    SELFUPDATE_SECURE_CLASS
 VIEW_ANY_DISC_TARGETS_ON_HOST   Resource Type    DISCOVERY
 VIEW_ANY_DISCOVERED_HOSTS       Resource Type    DISCOVERY
 CAN_SCAN_NETWORK_PRIVILEGE      Resource Type    DISCOVERY
 AD4J_ADMINISTRATOR              Resource Type    AD4J
 AD4J_USER                       Resource Type    AD4J
 JVMD_VIEW_LOCALS_PRIV           Resource Type    AD4J
 ACCESS_EM                       Resource Type    ACCESS
 PLUGIN_AGENT_ADMINISTRATOR      Resource Type    PLUGIN
 PLUGIN_OMS_ADMINISTRATOR        Resource Type    PLUGIN
 PLUGIN_VIEW                     Resource Type    PLUGIN
 ASREPLAY_VIEWER                 Resource Type    ASREPLAY_ENTITY_MGMT
 ASREPLAY_OPERATOR               Resource Type    ASREPLAY_ENTITY_MGMT
 MANAGE_ANY_CHANGE_PLAN          Resource Type    CHANGE_PLAN
 VIEW_ANY_OMS_PROPERTY           Resource Type    OMS_PROP_SECURE_CLASS
 MANAGE_ANY_OMS_PROPERTY         Resource Type    OMS_PROP_SECURE_CLASS
 CREATE_ANY_POLICY               Resource Type    CLOUDPOLICY
 VIEW_ANY_POLICY                 Resource Type    CLOUDPOLICY
 SVCD_CREATE_DASH                Resource Type    SVCD
 EMHA_ADMINISTRATION             Resource Type    EMHA_SECURE_CLASS
 VIEW_ANY_COMPLIANCE_FWK         Resource Type    COMPLIANCE_FWK
 CREATE_COMPLIANCE_ENTITY        Resource Type    COMPLIANCE_FWK
 FULL_ANY_COMPLIANCE_ENTITY      Resource Type    COMPLIANCE_FWK
 VIEW_ANY_POLICY_GROUP           Resource Type    CLOUDPOLICYGROUP
 CREATE_POLICY_GROUP             Resource Type    CLOUDPOLICYGROUP
 


Oracle Database 12c: Oracle OEM express


Together with Oracle Database 12c, Oracle has introduced a new administration console named Oracle Enterprise Manager Express (Oracle OEM Express). This “light” version of Enterprise Manager Database Console – which is not supported in Oracle 12c anymore – is a management product built into Oracle Database 12c.

There is no installation or management required: creating the database with dbca enables OEM Express and gives you the connection URL.
As there are no mid-tier or middleware components in OEM Express, the performance overhead on the database server is minimal.
The connection URL is: http://db_hostname:port/em/
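The port can be checked or changed afterwards through the DBMS_XDB_CONFIG package. A minimal sketch, assuming HTTPS and the default port 5500 (adapt to your configuration):

# Check / enable the EM Express HTTPS port (0 means disabled):
sqlplus -s / as sysdba <<'EOF'
select dbms_xdb_config.gethttpsport() from dual;
-- to (re)enable EM Express, uncomment the following line:
-- exec dbms_xdb_config.sethttpsport(5500)
EOF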

OEM Express: the features

OEM Express only features the basic administration pages of Enterprise Manager Cloud 12c:

Configuration:

  • Initialization parameters
  • Memory
  • Database feature
  • Database properties

Storage:

  • Tablespace
  • Undo
  • Redo
  • Archive log
  • Control files

Performance hub:

  • Real-time performance monitoring and tuning
  • Historical performance and tuning
  • SQL monitoring
  • ADDM
  • Active Session History (ASH) analytics
  • Automatic SQL Tuning Advisor and SQL Tuning Advisor

Thus, OEM Express allows you to manage the basic features of an Oracle database: user security, database memory, and storage. You can also access real-time performance charts.
OEM Express is easy to handle and the response times are very good.
The database homepage looks like the Cloud Control 12c homepage, with its performance, resources, and SQL monitoring sections:

em1

The storage page displays the main properties of the tablespaces and datafiles (free space, datafile, etc.):

em2

The user page displays the different user properties (account status, creation date, expiration date, etc.):

em3

The performance hub page shows a lot of useful information (memory, I/O, monitored SQL, ADDM):

em4

Conclusion

OEM Express might be a good solution if you do not have 100 databases to administer; in that case, you should use Oracle Enterprise Manager Cloud Control 12c.
With OEM Express, Oracle has replaced the old, resource-consuming Oracle Enterprise Manager Database Console by a new, easy-to-handle, and well-performing product.
One of the main advantages is that you do not need to install the product: OEM Express is built into the database, and its performance impact is really minimal.

 


Oracle Open World 2013: Day four – Oracle WebLogic Server management


Today, I went to a session on Oracle WebLogic Server management. It was about the challenges application infrastructures face in delivering a high quality of service in the cloud. The main goal is to optimize production and efficiency through automated management and administration. The speakers were David Cabetlus (Senior Principal Product Manager, Oracle) and Glen Hawkins (Senior Director of Product Management, Oracle). The session had two parts: Oracle WebLogic Server and Oracle Enterprise Manager Cloud Control 12cR3.

Oracle WebLogic Server

This part focused on the WebLogic management tools for configuration, operations, and administration.

The following tools were explained: WLST, JMX, the Administration Console, the REST API, and MBean trees. WLDF was also discussed. Let me add that WLST is further optimized with every new version in terms of automation, standardization, and repeatability.

Do not forget the Oracle Enterprise Pack for Eclipse, which is a specific plug-in providing WLS management features (WLST script coding, MBean navigators, and so on).

The WebLogic Diagnostic Framework (WLDF) features improved monitoring (logs, runtime metrics, and instrumentation) and automation.
WLDF is capable of capturing, processing, exposing, and retrieving information on WebLogic domains.

It includes a dashboard, watch rules, and notifications to easily diagnose WebLogic domains.

Cloud elasticity with dynamic, declaratively configured clusters was also discussed. The goal is to enable scalability for the cloud.

A unified management framework based on the WLS management framework was also discussed, covering the management of the application server (WebLogic) and the data grid infrastructure (Coherence).

Another improvement is the Maven integration, which enables automation for building and deploying WebLogic applications against a remote repository (jars and POMs).

Oracle Enterprise Manager Cloud Control 12cR3

The full title of the presentation was “Taking to the next level with Oracle Enterprise Manager Cloud Control 12cR3”.

It focused on the following innovations:

  • Complete cloud life cycle management
  • Integrated cloud stack management
  • Business Driven application management

The complete cloud lifecycle management covers the whole cycle of the Oracle Cloud stack: setup, building, testing, deploying, monitoring, and managing (metering & chargeback). It is now completely managed by Oracle Enterprise Manager.

Oracle Cloud Stack

 

OOW20130925_1_001

Integrated cloud stack management

Here is a summary of the innovations concerning OEM “Total Cloud Control” 12cR3:

  • Performance monitoring for availability and performance issues, with the following features:

– Visibility across the cloud stack
– Real-time & hierarchical monitoring
– Automatic problem remediation via already implemented WLST scripts

  • JVM diagnostics for WebLogic and Java applications

– Complete visibility of the JVM stack
– Heap and thread analysis, identifying the ECID in a running thread
– Can be deployed in production with no restart needed
– Middleware Diagnostics Advisor
– Root cause analysis (evaluates and provides guided help)
– Diagnosis of JMS, JDBC properties, and so on

  • Patching and provisioning

– Reduced effort and time, error elimination
– Automation and tracking
– Pre-validation

Oracle Enterprise Manager is also designed to manage all Cloud Application Foundation products.

Business driven application management

Features have been added to manage business applications provided by Oracle, such as SOA, WebCenter, and User Experience Management, from a business perspective (SLA, status, infrastructure health). All business transactions (BT) are now tracked to monitor usage frequency. Middleware and database requests and diagnostics are also tracked for these business applications.

After this session, I begin to understand why they call it OEM “Total Cloud Control”.

 


Oracle EM agent 12c thread leak on RAC


In a previous post about the nproc limit, I wrote that I had to investigate the nproc limit and the number of threads because my Oracle 12c EM agent had thousands of threads. This post is a short feedback on this issue and the way I found the root cause. It concerns the Enterprise Manager agent 12c on Grid Infrastructure >= 11.2.0.2.

NLWP

The issue was:

ps -o nlwp,pid,lwp,args -u oracle | sort -n
NLWP   PID   LWP COMMAND
   1  8444  8444 oracleOPRODP3 (LOCAL=NO)
   1  9397  9397 oracleOPRODP3 (LOCAL=NO)
   1  9542  9542 oracleOPRODP3 (LOCAL=NO)
   1  9803  9803 /u00/app/oracle/product/agent12c/core/12.1.0.3.0/perl/bin/perl /u00/app/oracle/product/agent12c/core/12.1.0.3.0/bin/emwd.pl agent /u00/app/oracle/product/agent12c/agent_inst/sysman/log/emagent.nohup
  19 11966 11966 /u00/app/11.2.0/grid/bin/oraagent.bin
1114  9963  9963 /u00/app/oracle/product/agent12c/core/12.1.0.3.0/jdk/bin/java ... emagentSDK.jar oracle.sysman.gcagent.tmmain.TMMain

By default, ps has only one entry per process, but each process can have several threads – implemented on Linux as light-weight processes (LWP). Here, the NLWP column shows that I have 1114 threads for my EM 12c agent – and it was increasing every day until it reached the limit and the node failed (‘Resource temporarily unavailable’).

The first thing to do is to find out what those threads are. The ps entries do not have a lot of information, but I discovered jstack, which every Java developer knows, I presume. You probably know that Java has very verbose (lengthy) stack traces. jstack was able to show me thousands of them in only one command:

Jstack

$ jstack 9963
2014-06-03 13:29:04
Full thread dump Java HotSpot(TM) 64-Bit Server VM (20.14-b01 mixed mode):

"Attach Listener" daemon prio=10 tid=0x00007f3368002000 nid=0x4c9b waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

"CRSeOns" prio=10 tid=0x00007f32c80b6800 nid=0x3863 in Object.wait() [0x00007f31fe11f000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	at oracle.eons.impl.NotificationQueue.internalDequeue(NotificationQueue.java:278)
	- locked  (a java.lang.Object)
	at oracle.eons.impl.NotificationQueue.dequeue(NotificationQueue.java:255)
	at oracle.eons.proxy.impl.client.base.SubscriberImpl.receive(SubscriberImpl.java:98)
	at oracle.eons.proxy.impl.client.base.SubscriberImpl.receive(SubscriberImpl.java:79)
	at oracle.eons.proxy.impl.client.ProxySubscriber.receive(ProxySubscriber.java:29)
	at oracle.sysman.db.receivelet.eons.EonsMetric.beginSubscription(EonsMetric.java:872)
	at oracle.sysman.db.receivelet.eons.EonsMetricWlm.run(EonsMetricWlm.java:139)
	at oracle.sysman.gcagent.target.interaction.execution.ReceiveletInteractionMgr$3$1.run(ReceiveletInteractionMgr.java:1401)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
	at oracle.sysman.gcagent.util.system.GCAThread$RunnableWrapper.run(GCAThread.java:184)
	at java.lang.Thread.run(Thread.java:662)
...

CRSeOns

I don’t paste all of them here. We have the ‘main’ thread, we have a few GC and ‘Gang worker’ threads which are present in all JVMs, and we have a few Enterprise Manager threads. What was interesting was that I had thousands of “CRSeOns” threads, and their number seemed to be increasing.
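To spot such a leak quickly, the thread names can simply be counted. A small sketch (9963 being the agent JVM pid from the ps output above):

# Count the jstack threads by name: a leaking thread name bubbles to the top.
jstack 9963 | grep '^"' | awk -F'"' '{print $2}' | sort | uniq -c | sort -rn | head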

Some guesses: I’m on RAC, I have an ‘ons’ resource, and the EM agent tries to subscribe to it. A Google search returned nothing, and that’s the reason I put this in a blog post now. Then I searched MOS, and bingo, there is a note: Doc ID 1486626.1. It has nothing to do with my issue, but it has an interesting comment in it:

In cluster version 11.2.0.2 and higher, the ora.eons resource functionality has been moved to EVM. Because of this the ora.eons resource no longer exists or is controlled by crsctl.

It also explains how to disable EM agent subscription:

emctl setproperty agent -name disableEonsRcvlet -value true

I’m on 11.2.0.3 and I have thousands of threads related to a functionality that doesn’t exist anymore. And that led to some failures in my 4-node cluster.

The solution was simple: disable it.

For a long time, I have seen a lot of memory leaks and CPU usage leaks related to the Enterprise Manager agent. With this new issue, I discovered a thread leak. I also faced an SR leak when trying to get support for the ‘Resource temporarily unavailable’ error, going back and forth between the OS, Database, Cluster, and EM support teams…

 


Oracle OEM Cloud Control 12c upgrade to 12.1.0.4


In this blog post, I will describe how to upgrade from Oracle Enterprise Manager Cloud Control 12.1.0.3 to OEM 12.1.0.4.0. I have already described the main new features of the Cloud Control 12.1.0.4 version in an earlier post (Oracle OEM Cloud Control 12.1.0.4 – the new features). The first prerequisite is to apply patch 11061801 on the 11.2.0.3 repository database, using the classical opatch apply method. Then we can begin the upgrade phase.

First, we should explicitly stop the OMS jvmd and adp engines:

oracle@vmtestoraem12c:/home/oracle/ [oms12c] emctl extended oms jvmd stop -all
Oracle Enterprise Manager Cloud Control 12c Release 3
Copyright (c) 1996, 2013 Oracle Corporation. All rights reserved.
Please enter the SYSMAN password:
Stopping all Engines
{}
No engines found for this operation
oracle@vmtestoraem12c:/home/oracle/ [oms12c] emctl extended oms adp stop -a
Oracle Enterprise Manager Cloud Control 12c Release 3
Copyright (c) 1996, 2013 Oracle Corporation. All rights reserved.
No valid registry entry found for verb jv

Then we stop the OMS:

oracle@vmtestoraem12c:/home/oracle/ [oms12c] emctl stop oms -all
Oracle Enterprise Manager Cloud Control 12c Release 3
Copyright (c) 1996, 2013 Oracle Corporation. All rights reserved.
Stopping WebTier...
WebTier Successfully Stopped
Stopping Oracle Management Server...
Oracle Management Server Successfully Stopped
AdminServer Successfully Stopped
Oracle Management Server is Down

We stop the management agent:

oracle@vmtestoraem12c:/home/oracle/ [agent12c] emctl stop agent
Oracle Enterprise Manager Cloud Control 12c Release 3
Copyright (c) 1996, 2013 Oracle Corporation.
All rights reserved.
Stopping agent ..... stopped.

Finally after unzipping the 12.1.0.4 binary files, we can run the installer:

cc1

We choose not to receive security updates:

cc2

cc3

We choose to skip the updates:

cc4

All the prerequisite checks have succeeded :=)

cc5

We select a One System Upgrade and the Oracle_Home where the 12.1.0.3 version is installed:

cc7

We select the new Middleware Home:

cc8

We enter the administration passwords:

cc9

The installer reminds you that the repository database must be correctly patched. Let’s check that this is the case:
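A quick way to filter the inventory (a sketch, assuming opatch is run from the repository database Oracle Home):

# Show the interim patch entry for 11061801 with one line of context:
$ORACLE_HOME/OPatch/opatch lsinventory | grep -B 1 -A 3 "Patch 11061801"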

Interim patches (1) :
Patch 11061801 : applied on Mon Aug 04 16:52:51 CEST 2014
Unique Patch ID: 16493357
Created on 24 Jun 2013, 23:28:20 hrs PST8PDT
Bugs fixed: 11061801

 

cc10

We did not copy the emkey to the repository, so we have to run:

oracle@vmtestoraem12c:/u01/app/oracle/MiddleWare_12103/oms/bin/ [oms12c] emctl config emkey -copy_to_repos_from_file -repos_conndesc '"(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=vmtestoraem12c.it.dbi-services.com)(PORT=1521)))(CONNECT_DATA=(SID=OMSREP)))"' -repos_user sysman -emkey_file /u01/app/oracle/MiddleWare_12103/oms/sysman/config/emkey.ora
Oracle Enterprise Manager Cloud Control 12c Release 3
Copyright (c) 1996, 2013 Oracle Corporation.
All rights reserved.
Enter Admin User's Password :
Enter Enterprise Manager Root (SYSMAN) Password :
The EMKey has been copied to the Management Repository.
This operation will cause the EMKey to become unsecure.

After the required operation has been completed, secure the EMKey by running “emctl config emkey -remove_from_repos”:

cc11

We select Yes to let the installer fix the issue automatically:

cc12

We select Next:

cc13

We can select additional plugins:

cc14

We enter the WebLogic password:

cc15

We select install:

cc16

And finally we run the allroot.sh script connected as root:

cc17

The upgrade is successful! Let’s check the OMS status:

oracle@vmtestoraem12c:/u01/app/oracle/MiddleWare_12cR4/oms/ [oms12c] emctl status oms -details
Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.
All rights reserved.
Enter Enterprise Manager Root (SYSMAN) Password :
Console Server Host : vmtestoraem12c.it.dbi-services.com
HTTP Console Port : 7789
HTTPS Console Port : 7801
HTTP Upload Port : 4890
HTTPS Upload Port : 4901
EM Instance Home : /u01/app/oracle/gc_inst/em/EMGC_OMS1
OMS Log Directory Location : /u01/app/oracle/gc_inst/em/EMGC_OMS1/sysman/log
OMS is not configured with SLB or virtual hostname
Agent Upload is locked.
OMS Console is locked.
Active CA ID: 1
Console URL: https://vmtestoraem12c.it.dbi-services.com:7801/em
Upload URL: https://vmtestoraem12c.it.dbi-services.com:4901/empbs/upload
WLS Domain Information
Domain Name : GCDomain
Admin Server Host : vmtestoraem12c.it.dbi-services.com
Admin Server HTTPS Port: 7102
Admin Server is RUNNING
Oracle Management Server Information
Managed Server Instance Name: EMGC_OMS1
Oracle Management Server Instance Host: vmtestoraem12c.it.dbi-services.com
WebTier is Up
Oracle Management Server is Up
BI Publisher is not configured to run on this host.

Now we have access to the Enterprise Manager Cloud Control 12.1.0.4:

cc18

The next step consists in upgrading the management agents. From the Setup menu, we select Upgrade Agents:

cc19

cc20

The management agent is detected:

cc21

The operation is successful:

cc22

The upgrade to Enterprise Manager 12.1.0.4 did not cause any problems, and it includes a new feature which checks the correct patching of the Enterprise Manager repository database.

 


Query the Enterprise Manager collected metrics


Enterprise Manager (Cloud Control, for example) gathers a lot of metrics. You can display them in the GUI, but you can also query the SYSMAN views directly. Today, I wanted to get the history of free space in an ASM disk group for the previous week. Here is how I got it.
Enterprise Manager metrics are aggregated into MGMT_METRICS_1HOUR (granularity 1 hour, retention 1 month) and MGMT_METRICS_1DAY (granularity 1 day, retention 1 year), but the detailed collected values are kept for 7 days in MGMT_METRICS_RAW. This is what I’ll query.
All that is in the SYSMAN schema:

SQL> alter session set current_schema=SYSMAN;

The metrics are related to a target and a metric. Let’s find them.

target

First, let’s have a look at all the available target types in MGMT_TARGETS:

select distinct target_type,type_display_name from mgmt_targets order by 1;

TARGET_TYPE TYPE_DISPLAY_NAME
cluster Cluster
composite Group
has Oracle High Availability Service
host Host
j2ee_application Application Deployment
metadata_repository Metadata Repository
microsoft_sqlserver_database Microsoft SQL Server
oracle_apache Oracle HTTP Server
oracle_beacon Beacon
oracle_database Database Instance
oracle_dbsys Database System
oracle_em_service EM Service
oracle_emd Agent
oracle_emrep OMS and Repository
oracle_emsvrs_sys EM Servers System
oracle_home Oracle Home
oracle_ias_farm Oracle Fusion Middleware Farm
oracle_listener Listener
oracle_oms Oracle Management Service
oracle_oms_console OMS Console
oracle_oms_pbs OMS Platform
osm_cluster Cluster ASM
osm_instance Automatic Storage Management
rac_database Cluster Database
weblogic_domain Oracle WebLogic Domain
weblogic_j2eeserver Oracle WebLogic Server

I want to see the ASM metrics for my RAC cluster. The display name ‘Cluster ASM’ has the internal type ‘osm_cluster’ (yes, it was initially called Oracle Storage Management).

Then here are all the targets I have for that target type:

SQL> select target_name,target_type,target_guid from mgmt_targets where target_type='osm_cluster';
TARGET_NAME TARGET_TYPE TARGET_GUID
+ASM_xxzhorac1 osm_cluster B8A5A42E2F8F6FCF6CF9FEB082B4CD79

In SYSMAN schema, we have GUID identifiers.

metric

Then, for each target type, there is a large number of metrics referenced in MGMT_METRICS:

select distinct target_type,metric_name,metric_label,metric_column,column_label,metric_guid 
from mgmt_metrics
where target_type='osm_cluster' and metric_label like 'Disk Group%'
order by target_type,metric_name,metric_column;

 

METRIC_NAME METRIC_LABEL COLUMN_LABEL
DiskGroup_Target_Component Disk Group Target Component
DiskGroup_Target_Component Disk Group Target Component Disk Group Name
DiskGroup_Target_Component Disk Group Target Component Disk Count
DiskGroup_Usage Disk Group Usage
DiskGroup_Usage Disk Group Usage Disk Group Name
DiskGroup_Usage Disk Group Usage Disk Group Free (MB)
DiskGroup_Usage Disk Group Usage Disk Group Used %
DiskGroup_Usage Disk Group Usage Used % of Safely Usable
DiskGroup_Usage Disk Group Usage Size (MB)
DiskGroup_Usage Disk Group Usage Redundancy
DiskGroup_Usage Disk Group Usage Disk Group Usable Free (MB)
DiskGroup_Usage Disk Group Usage Disk Group Usable (MB)
asm_diskgroup Disk Groups
asm_diskgroup Disk Groups Allocation Unit Size (MB)
asm_diskgroup Disk Groups Disk Count
asm_diskgroup Disk Groups Disk Group
asm_diskgroup Disk Groups Redundancy
asm_diskgroup Disk Groups Size (GB)
asm_diskgroup Disk Groups Contains Voting Files
asm_diskgroup_attribute Disk Group Attributes
asm_diskgroup_attribute Disk Group Attributes Attribute Name
asm_diskgroup_attribute Disk Group Attributes Disk Group
asm_diskgroup_attribute Disk Group Attributes Value
diskgroup_imbalance Disk Group Imbalance Status
diskgroup_imbalance Disk Group Imbalance Status Disk Group Imbalance (%) without Rebalance
diskgroup_imbalance Disk Group Imbalance Status Disk Maximum Used (%) with Rebalance
diskgroup_imbalance Disk Group Imbalance Status Disk Minimum Free (%) without Rebalance
diskgroup_imbalance Disk Group Imbalance Status Disk Count
diskgroup_imbalance Disk Group Imbalance Status Disk Group
diskgroup_imbalance Disk Group Imbalance Status Actual Imbalance (%)
diskgroup_imbalance Disk Group Imbalance Status Actual Minimum Percent Free
diskgroup_imbalance Disk Group Imbalance Status Rebalance In Progress
diskgroup_imbalance Disk Group Imbalance Status Disk Size Variance (%)

OK, there are a lot of metrics.
If you want more information about them, just go to the Enterprise Manager documentation. I’m interested in disk group rebalancing, and the documentation for the Disk Group Imbalance Status metrics is here.

collected values

Now let’s put that together and join with MGMT_METRICS_RAW, where I’m interested in the ‘U90’ disk group:

select 
 to_char(collection_timestamp,'dd-mon-yyyy') day,to_char(collection_timestamp,'hh24:mi') hour
 ,metric_label||' - '||column_label label,key_value key,value
from
(select distinct target_name,target_type,target_guid from mgmt_targets where target_type='osm_cluster')
join (
 select distinct 
  target_type,metric_name,metric_label,metric_column,column_label,short_name,metric_guid 
 from mgmt_metrics
) using(target_type)
join mgmt_metrics_raw using(target_guid,metric_guid)
where key_value = 'U90' and collection_timestamp>sysdate-8
order by collection_timestamp desc,metric_label,column_label,key_value
;
DAY HOUR LABEL KEY VALUE
02-mar-2015 15:43 Disk Group Usage – Disk Group Free (MB) U90 939137
02-mar-2015 15:43 Disk Group Usage – Disk Group Usable (MB) U90 1279980
02-mar-2015 15:43 Disk Group Usage – Disk Group Usable Free (MB) U90 939137
02-mar-2015 15:43 Disk Group Usage – Disk Group Used % U90 26.629
02-mar-2015 15:43 Disk Group Usage – Size (MB) U90 1279980
02-mar-2015 15:43 Disk Group Usage – Used % of Safely Usable U90 26.629
02-mar-2015 15:39 Disk Group Imbalance Status – Actual Imbalance (%) U90 0.164381953

I usually get the result from SQL Developer and export it as html. This is what I’ve pasted above. And it’s easy to open it with Excel and get a nice pivot chart from it:

b2ap3_thumbnail_CaptureEM-Metrics.JPG

In my case, I was interested in the available free space on my disk group disks during the week. A disk was added on 24-Feb at 20:00, but the rebalance hung for 24 hours. The blue area is the minimum free space (among all the disk group disks – which have the same size) and the grey part is the size of the newly added disk that has to be rebalanced among all disks.

But the goal of this post is only to show how to get collected statistics:

  • identify the target type
  • identify the target
  • identify the metric
  • join that with the raw statistics

I need that very rarely, but it can help to analyze something that happened in the past.
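If you need to look back further than the 7 days of raw data, the same joins work against the hourly aggregates. A sketch: the columns rollup_timestamp and value_average replace collection_timestamp and value in the aggregate table – an assumption to verify on your release – and the metric join from the query above should be added to filter a single metric:

# Same idea against MGMT_METRICS_1HOUR (1 month retention):
sqlplus -s / as sysdba <<'EOF'
alter session set current_schema=SYSMAN;
select rollup_timestamp, key_value, value_average
from mgmt_metrics_1hour
join mgmt_targets using (target_guid)
where target_type = 'osm_cluster'
  and key_value = 'U90'
  and rollup_timestamp > sysdate - 30
order by rollup_timestamp;
EOF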

 



dbsnmp expiring password, manually triggering metrics collections


When you use Enterprise Manager Cloud Control 12c, the monitoring username commonly used is dbsnmp. Depending on the Oracle profile used for this user, the dbsnmp password can expire, and as a consequence, multiple targets will be seen in a pending status by Enterprise Manager Cloud Control 12c.

An interesting way to solve this problem is to create a metric extension detecting in how many days the password will expire:

You perform the operation as follows:

dbsnmp1

You define the collection frequency to 1 day

dbsnmp2

You define the SQL request

dbsnmp3
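For reference, a plausible query behind such a metric extension – a sketch based on dba_users (it assumes the monitoring credentials are allowed to read it; in the metric extension itself you would paste only the select statement):

# Days before the dbsnmp password expires, according to dba_users:
sqlplus -s / as sysdba <<'EOF'
select username, trunc(expiry_date - sysdate) as days_before_expiry
from dba_users
where username = 'DBSNMP';
EOF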

You define the metrics columns

dbsnmp4

Once the metric extension has been defined, saved as deployable, published, and applied to your database targets, it will tell you when the dbsnmp password is about to expire.

In case the dbsnmp password is about to expire, such an incident will be created:

dbsnmp7

By the way, you will notice that the incident created by the password expiry will not be re-evaluated before one day has passed, so even if you change the password, the incident will remain in the EM12c Incident Manager.

Of course you can use the Reevaluate Alert button in the incident manager page:

dbsnmp6

Nevertheless with EM12c, you have the possibility to manually trigger the metric collection by using the emctl control agent runCollection command.

For example, if you need to reevaluate the dbsnmp_expiry metric collection manually and not waiting one day, you will have to use the following command:

emctl control agent runcollection OMSREP:oracle_database 'ME$dbsnmp_expiry'

The metric will be reevaluated and the incident will disappear.

Manually triggering the metrics can be very helpful when you administer a lot of targets. The usual case is when you receive a tablespace full alert: you increase or add a data file to correct the error, but as before, the metric collection will not be re-evaluated immediately, so the generated incident will still be present.

The syntax of the command described in the Oracle documentation is:

emctl control agent runCollection <targetName>:<targetType> <collectionItemName>

 

Oracle’s documentation tells us to look in the XML files located under $AGENT_BASE/plugins in order to find the collectionItemName:

oracle@vm12c:/u01/app/oracle/agent12cR5/plugins/ [agent12c] ls
oracle.sysman.beacon.agent.plugin_12.1.0.5.0
oracle.sysman.emrep.agent.plugin_12.1.0.5.0
oracle.sysman.db.agent.plugin_12.1.0.8.0
oracle.sysman.oh.agent.plugin_12.1.0.5.0
oracle.sysman.db.discovery.plugin_12.1.0.8.0
oracle.sysman.oh.discovery.plugin_12.1.0.5.0
oracle.sysman.emas.agent.plugin_12.1.0.8.0
oracle.sysman.xa.discovery.plugin_12.1.0.6.0
oracle.sysman.emas.discovery.plugin_12.1.0.8.0

 

In our case, we have a look at the database.xmlp file in oracle.sysman.db.agent.plugin_12.1.0.8.0/default_collections and search for “Tablespaces Full”:

<!--
======================================================================
== Category: Tablespaces Full - 10i - locally managed - not CDB
======================================================================
-->
<CollectionItem NAME="problemTbsp_10i_Loc">
<ValidIf>
<CategoryProp NAME="VersionCategory" CHOICES="10gR2;10gR203;10gR204;10gR205;11gR1;11gR2;11gR202;12c"/>

 

<CategoryProp NAME="MetricScope" CHOICES="DB"/>
<CategoryProp NAME="DBCategoryDetails" CHOICES="FullLLFile;none;TRUE;FALSE;1;0"/>
</ValidIf>
<Schedule>
<IntervalSchedule INTERVAL="30" TIME_UNIT="Min"/>
</Schedule>

In order to manually trigger the tablespace full alert, we use the command:

oracle@vm12c:/u01/app/oracle/agent12cR5/ [agent12c] emctl control agent runCollection OMSREP:oracle_database problemTbsp_10i_Loc
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
EMD runCollection completed successfully

There is also an easy way to find all the collection item names: all those collections are scheduled by the agent, so you can run the command:

oracle@vmtestoraem12c:/u01/app/oracle/ [agent12c] emctl status agent scheduler | grep oracle_database | grep OMSREP
2015-09-24 15:16:53.222 : oracle_database:OMSREP:UserLocksCollection
2015-09-24 15:16:55.432 : oracle_database:OMSREP:Response
2015-09-24 15:16:57.805 : oracle_database:OMSREP:wait_sess_cls_10i
2015-09-24 15:17:12.566 : oracle_database:OMSREP:observer_11g
2015-09-24 15:17:42.703 : oracle_database:OMSREP:haconfig2_collection
2015-09-24 15:18:41.928 : oracle_database:OMSREP:Recovery_Area
2015-09-24 15:18:51.331 : oracle_database:OMSREP:latest_hdm_findings_coll_item
2015-09-24 15:18:54.788 : oracle_database:OMSREP:dgprimarydb_collection
2015-09-24 15:19:09.136 : oracle_database:OMSREP:adr_alert_log_rollup
2015-09-24 15:19:38.388 : oracle_database:OMSREP:incident_meter
2015-09-24 15:19:59.818 : oracle_database:OMSREP:topWaitEvents_col
2015-09-24 15:20:16.315 : oracle_database:OMSREP:db_alertlog_coll_12
2015-09-24 15:20:32.234 : oracle_database:OMSREP:haconfig4_collection
2015-09-24 15:20:48.166 : oracle_database:OMSREP:memory_usage_coll_item
2015-09-24 15:20:48.307 : oracle_database:OMSREP:activity_pending
2015-09-24 15:20:49.534 : oracle_database:OMSREP:archFull
2015-09-24 15:20:59.886 : oracle_database:OMSREP:haconfig3_collection
2015-09-24 15:21:14.840 : oracle_database:OMSREP:db_inst_pga_alloc_11g
2015-09-24 15:21:43.593 : oracle_database:OMSREP:rac_global_cache_10i
2015-09-24 15:21:47.527 : oracle_database:OMSREP:dataguard_11gR2
2015-09-24 15:22:09.095 : oracle_database:OMSREP:memory_usage_sga_pga
2015-09-24 15:22:11.189 : oracle_database:OMSREP:sga_pool_wastage_10i
2015-09-24 15:22:19.278 : oracle_database:OMSREP:UserAudit
2015-09-24 15:22:21.311 : oracle_database:OMSREP:Database_Resource_Usage_10i
2015-09-24 15:22:45.416 : oracle_database:OMSREP:sysTimeModel_col
2015-09-24 15:23:41.486 : oracle_database:OMSREP:db_inst_opt_sga
2015-09-24 15:23:43.516 : oracle_database:OMSREP:instance_efficiency_10i
2015-09-24 15:23:51.902 : oracle_database:OMSREP:system_response_time_per_call_10i
2015-09-24 15:24:32.947 : oracle_database:OMSREP:wait_bottlenecks_10i
2015-09-24 15:24:53.137 : oracle_database:OMSREP:topSqlMonitoringList_col
2015-09-24 15:25:04.134 : oracle_database:OMSREP:instance_throughput_10i
2015-09-24 15:26:19.263 : oracle_database:OMSREP:service_10i
2015-09-24 15:33:34.681 : oracle_database:OMSREP:log_full
2015-09-24 15:33:35.160 : oracle_database:OMSREP:problemTbsp_10i_Loc
2015-09-24 15:33:35.223 : oracle_database:OMSREP:problemTbsp_10i_Dct
2015-09-24 15:33:35.474 : oracle_database:OMSREP:problemTbspTemp_10i_Loc
2015-09-24 15:33:35.504 : oracle_database:OMSREP:problemTbspUndo_10i_Loc
2015-09-24 15:33:35.797 : oracle_database:OMSREP:dbjob_status
2015-09-24 15:33:37.800 : oracle_database:OMSREP:audit_failed_logins
2015-09-24 15:33:38.363 : oracle_database:OMSREP:aq_monitoring_alerts
2015-09-24 15:33:38.455 : oracle_database:OMSREP:streams_processes_count_item
2015-09-24 15:33:38.762 : oracle_database:OMSREP:streams_statistics
2015-09-24 15:33:40.027 : oracle_database:OMSREP:Temporary File Status
2015-09-24 16:03:34.340 : oracle_database:OMSREP:scn_instance_collection
2015-09-24 16:03:34.671 : oracle_database:OMSREP:db_inst_cpu_usage
2015-09-24 16:03:38.055 : oracle_database:OMSREP:DatabaseVaultRealmViolation_collection
2015-09-24 16:03:38.222 : oracle_database:OMSREP:DatabaseVaultRealmConfigurationIssue_collection
2015-09-24 16:03:38.239 : oracle_database:OMSREP:DatabaseVaultCommandRuleViolation_collection
2015-09-24 16:03:38.542 : oracle_database:OMSREP:DatabaseVaultCommandRuleConfigurationIssue_collection
2015-09-24 16:03:39.403 : oracle_database:OMSREP:DatabaseVaultPolicyChanges_collection
2015-09-24 16:03:39.871 : oracle_database:OMSREP:scn_growth_collection
2015-09-24 18:03:25.455 : oracle_database:OMSREP:oracle_dbconfig
2015-09-24 18:03:33.694 : oracle_database:OMSREP:ocm_instrumentation
2015-09-24 18:03:33.712 : oracle_database:OMSREP:mgmt_sql
2015-09-24 18:03:33.714 : oracle_database:OMSREP:has_resources
2015-09-24 18:03:33.905 : oracle_database:OMSREP:sizeOfOSAuditFiles_collection
2015-09-24 18:03:34.589 : oracle_database:OMSREP:problemSegTbsp
2015-09-24 18:03:34.849 : oracle_database:OMSREP:ha_dg_target_summary
2015-09-24 18:03:34.858 : oracle_database:OMSREP:invalid_objects_rollup
2015-09-24 18:03:35.003 : oracle_database:OMSREP:tbspAllocation
2015-09-24 18:03:35.917 : oracle_database:OMSREP:haconfig1_collection
2015-09-24 18:03:36.848 : oracle_database:OMSREP:audit_failed_logins_historical
2015-09-24 18:03:38.968 : oracle_database:OMSREP:mgmt_database_listener_config
2015-09-24 18:03:38.996 : oracle_database:OMSREP:exadataCollection
2015-09-24 18:03:39.449 : oracle_database:OMSREP:scn_max_collection
2015-09-24 18:03:39.546 : oracle_database:OMSREP:db_instance_caging
2015-09-24 18:03:47.855 : oracle_database:OMSREP:oracle_cdbconfig
2015-09-24 22:08:59.848 : oracle_database:OMSREP:ME$dbsnmp_expiry

 

You will find all the collection item names that you can re-evaluate with the emctl control agent runCollection command.

For a host target, the behavior is equivalent:

oracle@vmtestoraem12c:/u01/app/oracle/ [agent12c] emctl status agent scheduler | grep host | grep vmtestoraem12c

2015-09-24 15:22:12.565 : host:vm12c:Processes_diagnosticsLinux
2015-09-24 15:23:45.713 : host:vm12c:LoadLinux
2015-09-24 15:23:52.670 : host:vm12c:ProgramResourceUtilizationLinux
2015-09-24 15:24:10.227 : host:vm12c:PagingActivityLinux
2015-09-24 15:24:41.241 : host:vm12c:TotalDiskUsageLinux
2015-09-24 15:25:34.795 : host:vm12c:CPUUsageLinux
2015-09-24 15:26:19.727 : host:vm12c:FileMonitoringLinux
2015-09-24 15:26:31.792 : host:vm12c:NetworkLinux
2015-09-24 15:27:26.153 : host:vm12c:DiskActivityLinux
2015-09-24 15:33:35.080 : host:vm12c:proc_zombieLinux
2015-09-24 15:33:46.489 : host:vm12c:LogFileMonitorLinux
2015-09-24 15:34:22.695 : host:vm12c:FilesystemsLinux
2015-09-24 18:02:53.840 : host:vm12c:host_storage
2015-09-24 18:02:54.169 : host:vm12c:HostStorageSupport
2015-09-24 18:02:54.201 : host:vm12c:DiscoverTargets
2015-09-24 18:02:55.599 : host:vm12c:oracle_security
2015-09-24 18:02:56.532 : host:vm12c:Swap_Area_StatusLinux
2015-09-24 18:26:28.295 : host:vm12c:ll_host_config

 

Conclusion:

Depending on the metric evaluation frequency, it might be very useful to manually trigger the metric collections in order to clean up falsely open incidents in the Enterprise Manager Cloud Control 12c console.

 

 


Oracle EM Cloud Control 12c (CC12c)- Repeat alert notification



In your EM installation, CC12c notifies your different administrators when specific incidents, events, or problems occur.
But with the standard configuration, you will receive only one warning or critical notification/alert – until the alert is cleared or acknowledged.
Sometimes, for different reasons (human error, a deleted e-mail, mail server issues, …), this alert can be lost or forgotten.
Hmmm, this is not good … especially for a critical alert on a production environment!
To avoid such an uncomfortable situation, you can configure repeat notifications in your incident rules.

Description:
Repeat notifications allow administrators to be notified repeatedly until an incident is either acknowledged or the maximum number of repeat notifications has been reached.
CC12c supports repeat notifications for all the different notification methods (e-mail, OS command, PL/SQL procedure, and SNMP trap).
To enable this feature for a notification method, select the Send Repeat Notifications option.
In addition to setting the maximum number of repeat notifications, you can also set the time interval at which the notifications are sent.

Very important: Repeat notifications for rules will only be sent if the Send Repeat Notifications option is enabled in the Global Notification Methods page.

Here is an example of how to configure the global notification methods on the Notification Methods page:

[Screenshot: Repeat_in_Notification_Method_DA]

Configuring Repeat Notifications in Incident Rules:

Setting repeat notifications globally at the notification method level may not provide sufficient flexibility. For example, you may want different repeat notification settings based on the event type. Enterprise Manager accomplishes this by allowing you to set repeat notifications for individual incident rule sets or for individual rules within a rule set. Repeat notifications set at the rule level take precedence over those defined at the notification method level.

Here is an example of how to configure repeat notifications in an individual incident rule:

Edit one of your individual incident rules (within one of your rule sets).

[Screenshot: Repeat_Incident_rules_DA_Edit_Rule_1]

Choose Next in your individual Incident Rule.

[Screenshot: Repeat_Incident_rules_DA_Edit_Rule_2]

On the next page, “Add” or “Edit” an Action.

[Screenshot: Repeat_Incident_rules_DA_Edit_Rule_3]

In the Action pages, configure your repeat notification.

You can choose to select the global notification methods or to create your own method for this rule. Click Continue.

[Screenshot: Repeat_Incident_rules_DA_Edit_Rule_4]

On the Review page, check your settings. Click Continue.

[Screenshot: Repeat_Incident_rules_DA_Edit_Rule_5]

Click “Save” to enable your settings. Do not forget to save!

[Screenshot: Repeat_Incident_rules_DA_Edit_Rule_6]

Now you can simulate an incident or event to test that your Repeat settings are working correctly.

 

Visit our partner Oracle’s website for more information about repeat notifications.

Ref: Oracle® Enterprise Manager Cloud Control Administrator’s Guide:
– Chapter 3. Notifications –> http://docs.oracle.com/cd/E25054_01/doc.1111/e24473/notification.htm
– Chapter 4. Using Notifications –> https://docs.oracle.com/cd/E24628_01/doc.121/e24473/notification.htm#EMADM9066
4.1.5 Setting Up Repeat Notifications –> https://docs.oracle.com/cd/E24628_01/doc.121/e24473/notification.htm#EMADM9072

That’s it.

 


RMAN backup is failing due to “corrupt blocks”


Last week, I was not able to complete a backup because of the ORA-19566 error: “exceeded limit of 0 corrupt blocks”. Here is what you can do to fix it.

Starting Point: backup error

Here is the starting point of this case. The following error is found in the RMAN Backup log file:

allocated channel: ch1
channel ch1: sid=25 devtype=DISK
Starting backup at 27-APR-11
channel ch1: starting incremental level 0 datafile backupset
channel ch1: specifying datafile(s) in backupset
input datafile fno=00006 name=D:MXXRTXIDXXRIDX01.DBF
input datafile fno=00003 name=D:MXXRTXLXRL01.DBF
input datafile fno=00002 name=D:MXXRTXSXRS01.DBF
input datafile fno=00007 name=D:MXXRTXTMPXRTEMP01.DBF
input datafile fno=00004 name=D:MXXRTXBLOBXRBLOB01.DBF
channel ch1: starting piece 1 at 27-APR-11
released channel: ch1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ch1 channel at 04/27/2011 08:53:37
ORA-19566: exceeded limit of 0 corrupt blocks for file D:MXXRTXLXRL01.DBF

Validation of the database

First of all, verify your database with the RMAN validate command to find out which blocks are corrupted.

The RMAN command below checks for physical as well as logical corruption of the database. By default, the VALIDATE command only checks for physical corruption.

But what is the difference between a logical and a physical corruption?

  • Physical corruption: the block is not recognized at all.
  • Logical corruption: the contents of the block are logically inconsistent.

For your information: With Oracle 11.2, please use the new command:

RMAN> validate database check logical;

In my case, it was Oracle 9.2, so the command usage is slightly different:

RMAN> backup validate check logical database ;
Starting backup at 06-MAY-11
using target database controlfile instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=13 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00008 name=D:MXXRTXSYSRBS01.DBF
. . .
channel ORA_DISK_1: backup set complete, elapsed time: 00:14:15
Finished backup at 06-MAY-11
RMAN>

Once the validation command is finished, RMAN populates its findings in the V$DATABASE_BLOCK_CORRUPTION view.
In my case, I then thoroughly analyzed the corrupted block reported in the view, as shown below:

SQL> select * from v$database_block_corruption;
FILE#     BLOCK#     BLOCKS CORRUPTION_CHANGE# CORRUPTIO
---------- ---------- ---------- ------------------ ---------
 3     100261          1                  0 FRACTURED

Only one block was reported as fractured. I then tried to run a backup with a number of corrupt blocks allowed.

 

run
{
ALLOCATE CHANNEL ch1 TYPE DISK;
set maxcorrupt for datafile 3 to 10;
. . .
RELEASE CHANNEL ch1;
}

Find the corrupted blocks

Since the backup now runs successfully and no corrupted blocks are reported anymore, the next step is to identify which information is stored in the corrupted block.

Using the command below, it is possible to find out which object uses this block.

SQL> select segment_name,owner,segment_type from dba_extents
     where file_id=3
     and 100261 between block_id and block_id + blocks -1;
no rows selected
SQL>

Our corrupted block is an empty block reported as fractured in the view v$database_block_corruption.
Now, what is the definition of a fractured block?

Fractured: the block header looks reasonable, but the front and back of the block are different versions.
Since Oracle 9.2, it is possible to perform a block recovery in case a single block is corrupted.

How to fix the corruption

I therefore tried to fix the corrupted block using the command below:

RMAN> blockrecover datafile 3 block 100261;
Starting blockrecover at 06-MAY-11
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of blockrecover command at 05/06/2011 14:33:44
RMAN-05009: Block Media Recovery requires Enterprise Edition

 
Unfortunately, blockrecover is not usable because it requires Oracle Enterprise Edition and we are running Oracle Standard Edition.
I also tried to repair this block with the dbms_repair package, but dbms_repair doesn’t have any option to fix an empty corrupted block. :-?
After some deeper analysis, I decided to leave this fractured empty block in the database, as a corrupted unused block is not harmful. When Oracle reuses this block and assigns it to a segment, it will automatically reformat it, and the problem will be solved.
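As a side note, if you do not want to wait for a natural reuse of the block, a known approach (see MOS note 336133.1) is to force the reformatting by filling the free space of the tablespace with a dummy segment until the fractured block is reused. A minimal sketch, with purely illustrative names (the FILLER table and the MYTBS tablespace are hypothetical):

SQL> create table filler (c char(2000)) tablespace mytbs;

SQL> begin
  2    -- keep inserting until the tablespace is full; the loop ends with
  3    -- ORA-01653, at which point the free (fractured) blocks have been
  4    -- reused and reformatted by Oracle
  5    loop
  6      insert into filler values (rpad('x', 2000, 'x'));
  7      commit;
  8    end loop;
  9  end;
 10  /

SQL> drop table filler;

A subsequent backup validate check logical should then report the block as clean.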

For your information: it is also possible to fix this error, but the database would need a maintenance window to export and re-import the data into a newly created empty tablespace.

Conclusion

Since Oracle 10.2, RMAN also skips currently unused blocks, as opposed to only never-used blocks in Oracle 9.2. In my case, the block was recognized as corrupted because I was running an Oracle 9.2 database. In newer Oracle versions, this problem would not have been reported: Oracle would have ignored this corruption and your backup would have been running successfully…

 


Block Corruption Oracle 10g vs Oracle 11g


Having recently run into corrupt block problems on Oracle databases in versions 10g and 11g, I noticed significant differences, in particular around the v$database_block_corruption view. In this post, I share the results of my tests.
Oracle checks the integrity of the data in a database block before it is written from the buffer cache to disk. If a block inconsistency is detected, the block is marked as corrupt.

Corruption comes in two types, logical and physical.
Physical block corruption can be caused by (non-exhaustive list):

  • failing disks or disk controllers
  • defective memory
  • defective I/O interfaces (for example a defective disk controller)
  • incomplete OS I/O

In the case of physical corruption, the database does not recognize the block at all: the checksum is invalid, and the block header and footer do not match.
In the case of logical corruption, the contents of the block are logically inconsistent. When RMAN detects a logical corruption, it leaves a trace in the alert.log file.
By default, RMAN only checks for physical corruption during the backup, not for logical corruption. To also check for logical corruption, it must be specified with the CHECK LOGICAL option.
If the CHECK LOGICAL option is specified in the RMAN backup command, RMAN checks the data as well as the indexes and leaves a trace in the alert.log file in case of corruption.
On an Oracle 10g database, we perform the following operations:
We create a tablespace:

SQL> create tablespace CORRUPT datafile '/testdba/db01/corrupt01.dbf' size 100M; 
Tablespace created.

We create a user hack with the appropriate privileges and an employe table, and insert some data into it:

SQL> create user hack identified by hack
2 default tablespace corrupt
3 temporary tablespace temp; 
User created. 
SQL> grant connect , resource ,dba to hack;
Grant succeeded. 
SQL> connect hack/hack
Connected. 
SQL> create table employe (name varchar2(10));
Table created. 
SQL> insert into employe values ('john');
1 row created. 
SQL> insert into employe values ('bill');
1 row created. 
SQL> insert into employe values ('brad');
1 row created. 
SQL> insert into employe values ('joe');
1 row created. 
SQL> commit; 
Commit complete.

The data is consistent:

SQL> select * from employe; 
NAME
----------
john
bill
brad
joe

Then we deliberately corrupt the physical file corrupt01.dbf with the dd command:
oracle@l113:~/psi/

[TESTDBA] dd of=/testdba/db01/corrupt01.dbf bs=8192 conv=notrunc seek=12 << EOF
> blahblah blahblah blahblah blahblah blahblah blahblah blahblah blahblah> blahblahiblahblah blahblah
> EOF
0+1 records in
0+1 records out
101 bytes (101 B) copied, 8.8e-05 seconds, 1.1 MB/s

Querying the data of the employe table reveals the existence of corrupted blocks:

SQL> alter system flush buffer_cache;
 System altered.
SQL> select * from employe;
select * from employe
               *
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 7, block # 12)
ORA-01110: data file 7: '/testdba/db01/corrupt01.dbf'

Using RMAN with the "check logical" option also confirms the existence of the corrupted logical blocks. RMAN then checks for both logical and physical corruption. By default (without the check logical option), RMAN only detects physical corruption.

RMAN> backup check logical tablespace corrupt;
 
allocated channel: ORA_DISK_1
 
channel ORA_DISK_1: sid=179 devtype=DISK
 
channel ORA_DISK_1: starting full datafile backupset
 
channel ORA_DISK_1: specifying datafile(s) in backupset
 
input datafile fno=00007 name=/testdba/db01/corrupt01.dbf
 
channel ORA_DISK_1: starting piece 1 at 23-SEP-11
 
RMAN-00571: ===========================================================
 
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
 
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 09/23/2011 15:39:00
ORA-19566: exceeded limit of 0 corrupt blocks for file /testdba/db01/corrupt01.dbf

However, the v$database_block_corruption view contains no rows:
 

SQL> select * from v$database_block_corruption;
no rows selected

The alert.log file is also correctly populated:

Reread of blocknum=12, file=/testdba/db01/corrupt01.dbf. found same corrupt data
Reread of blocknum=12, file=/testdba/db01/corrupt01.dbf. found same corrupt data
Reread of blocknum=12, file=/testdba/db01/corrupt01.dbf. found same corrupt data
Reread of blocknum=12, file=/testdba/db01/corrupt01.dbf. found same corrupt data
Reread of blocknum=12, file=/testdba/db01/corrupt01.dbf. found same corrupt data

RMAN must be run with the "validate check logical" option for the v$database_block_corruption view to be populated (see Metalink note ID 471716.1):

RMAN> backup validate check logical tablespace corrupt;
 
allocated channel: ORA_DISK_1
 
channel ORA_DISK_1: sid=186 devtype=DISK
 
channel ORA_DISK_1: starting full datafile backupset
 
channel ORA_DISK_1: specifying datafile(s) in backupset
 
input datafile fno=00007 name=/testdba/db01/corrupt01.dbf
 
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
 
Finished backup at 23-SEP-11
Starting backup at 23-SEP-11
SQL> select * from v$database_block_corruption; 
FILE# BLOCK# BLOCKS CORRUPTION_CHANGE# CORRUPTION
----- ------ ------ ------------------ ----------
    7      12     1                  0    CORRUPT

We use the block recovery method to repair the corrupted blocks. There are of course some restrictions on the use of this method:

  • the database must be mounted or open
  • block recovery cannot be performed in noarchivelog mode
  • if RMAN cannot access a particular archived redo log file, it performs a restore failover, trying other available backups; if no backup is available, the RMAN block recovery process ends in error
  • in version 10g, a full backup of the file containing the corrupted blocks is required; the block media recovery method cannot use incremental backups
RMAN> run {
2> allocate channel ch1 type 'sbt_tape';
3> blockrecover corruption list;
4> }
allocated channel: ch1
channel ch1: sid=179 devtype=SBT_TAPE
channel ch1: Data Protection for Oracle: version 5.5.2.0 
Starting blockrecover at 23-SEP-11 
channel ch1: restoring block(s)
channel ch1: specifying block(s) to restore from backup set
restoring blocks of datafile 00007
channel ch1: reading from backup piece df_TESTDBA_762622144_23
channel ch1: restored block(s) from backup piece 1
piece handle=df_TESTDBA_762622144_23 tag=TAG20110923T152904
channel ch1: block restore complete, elapsed time: 00:00:16 
starting media recovery
media recovery complete, elapsed time: 00:00:01 
Finished blockrecover at 23-SEP-11
released channel: ch1

The blocks are now restored:

SQL> select * from employe; 
NAME
----
john
bill
brad
joe

But the v$database_block_corruption view still contains data indicating that a block is corrupt:

SQL> select * from v$database_block_corruption; 
FILE# BLOCK# BLOCKS CORRUPTION_CHANGE# CORRUPTION
----- ------ ------ ------------------ ----------
    7     12      1                  0    CORRUPT

A "backup validate check logical" must be run again for the data in this view to be correct:

RMAN> backup validate check logical tablespace corrupt;
Starting backup at 23-SEP-11
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=186 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00007 name=/testdba/db01/corrupt01.dbf
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 23-SEP-11
SQL> select * from v$database_block_corruption;
no rows selected

Let us now perform the same operations on an Oracle 11g database.
The dd command is used to corrupt the data:

oracle@l113:~/psi/ 

[DORBELV2] dd of=/dorbelv2/db01/data/corrupt01.dbf bs=8192 conv=notrunc seek=131 << EOF
> blahblah blahblah blahblah blahblah blahblah blahblah blahblah blahblah blahblah
> EOF
0+1 records in
0+1 records out
82 bytes (82 B) copied, 6.8e-05 seconds, 1.2 MB/s
SQL> select * from employe;
select * from employe
               *
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 6, block # 131)
ORA-01110: data file 6: '/dorbelv2/db01/data/corrupt01.dbf'

Querying the v$database_block_corruption view shows that it is already populated; there is no longer any need to run RMAN with the "validate check logical" option:

SQL> select * from v$database_block_corruption; 
 
FILE# BLOCK# BLOCKS CORRUPTION_CHANGE# CORRUPTION
----- ------ ------ ------------------ ----------
    6    131      1                   0   CORRUPT

In version 11g, we use the new RMAN features to repair the corrupted blocks. Before any restore operation, you should of course analyze which type of segment is impacted. If the segment is an index, the problem can be solved quickly, since all the data needed to build the index is available: the index can simply be dropped and recreated. If the corruption affects a temporary tablespace, it is possible to drop the tablespace and create a new one, assigning the new tablespace to the affected users (see Metalink note ID 28814.1).

RMAN> list failure;
List of Database Failures
=========================
Failure ID Priority Status Time Detected Summary
---------- -------- ------ ------------- -------
      1042     HIGH   OPEN     23-SEP-11 Datafile 6: '/dorbelv2/db01/data/corrupt01.dbf' contains one or more corrupt blocks
RMAN> advise failure;
 List of Database Failures
========================= 
Failure ID Priority Status Time Detected Summary
---------- -------- ------ ------------- -------
      1042     HIGH   OPEN     23-SEP-11 Datafile 6: '/dorbelv2/db01/data/corrupt01.dbf' contains one or more corrupt blocks
analyzing automatic repair options;
this may take some time
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=19 device type=DISK
analyzing automatic repair options complete
Mandatory Manual Actions
========================
no manual actions available 
Optional Manual Actions
=======================
no manual actions available
Automated Repair Options
========================
Option Repair Description
------ ------------------
 Perform block media recovery of block 131 in file 6
Strategy: The repair includes complete media recovery with no data loss
Repair script: /dorbelv2/sys01/dump/diag/rdbms/dorbelv2/DORBELV2/hm/reco_3548441823.hm

RMAN> repair failure;
Strategy: The repair includes complete media recovery with no data loss
Repair script: /dorbelv2/sys01/dump/diag/rdbms/dorbelv2/DORBELV2/hm/reco_3548441823.hm
contents of repair script:
# block media recovery 
recover datafile 6 block 131; 
Do you really want to execute the above repair (enter YES or NO)?
yes
executing repair script
Starting recover at 23-SEP-11
using channel ORA_DISK_1
channel ORA_DISK_1: restoring block(s)
channel ORA_DISK_1: specifying block(s) to restore from backup set
restoring blocks of datafile 00006
channel ORA_DISK_1: reading from backup piece /mnt/oratmp/df_DORBELV2_762623690_3
channel ORA_DISK_1: piece handle=/mnt/oratmp/df_DORBELV2_762623690_3 tag=TAG20110923T155449
channel ORA_DISK_1: restored block(s) from backup piece 1
channel ORA_DISK_1: block restore complete, elapsed time: 00:00:01 
starting media recovery
media recovery complete, elapsed time: 00:00:01
Finished recover at 23-SEP-11
repair failure complete

The data is correct again:

SQL> select * from employe; 
NAME
----
john
bill
brad
joe

The v$database_block_corruption view is correctly maintained without having to run a backup validate:

SQL> select * from v$database_block_corruption;
no rows selected

So in version 11g, every time corrupt blocks are detected, the v$database_block_corruption view is populated, and a repair of the corrupted blocks removes the corresponding metadata from this view.
The various corrupt block repair techniques include block recovery, physical file restore, and restore from an incremental backup. Note, however, that while the block recovery method can repair physical corruption, it cannot repair logical corruption (see Metalink note ID 391120.1).

Conclusion:

Oracle 11g has improved both the management of the v$database_block_corruption view and the detection of corrupt blocks. Moreover, the new RMAN features such as list failure, advise failure, and repair failure allow fast and efficient diagnosis and repair.
I would also recommend systematically using the check logical option in your RMAN backup commands. There is admittedly a slight overhead on backup times, but it is always better to detect corrupt blocks as early as possible. In this context, it may also be worth evaluating the DB_BLOCK_CHECKING=YES option in the init.ora or spfile.ora file (see Metalink note ID 32969.1). Obviously, before implementing these solutions in production, they must be tested.
Furthermore, in version 11g, setting up a script or tool returning the contents of v$database_block_corruption may prove necessary: adding a check for corrupt blocks is interesting, but you also need to be informed about it quickly.
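As an illustration, such a check could be as simple as the following shell sketch (assuming the Oracle environment is set, a local "/ as sysdba" connection is possible, and a mail command is available; the recipient address is an example):

#!/bin/bash
# Minimal sketch: send a mail if v$database_block_corruption contains rows.
# Assumes ORACLE_HOME and ORACLE_SID are set for the target database.
COUNT=$(sqlplus -s "/ as sysdba" <<EOF | tr -d ' '
set heading off feedback off pagesize 0
select count(*) from v\$database_block_corruption;
EOF
)
if [ "$COUNT" -gt 0 ]; then
  echo "$COUNT corrupt block range(s) in v\$database_block_corruption" \
    | mail -s "Block corruption detected on $ORACLE_SID" dba@example.com
fi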

 


Oracle 12c: Pluggable databases not that isolated


As you probably know, multitenant databases are the new feature of Oracle 12c to solve the dilemma of application segregation. Thanks to a multitenant container database, it is possible to manage many databases as one, taking advantage of resource consolidation. A perfect way to manage several applications in a single container. However, are these pluggable databases as isolated as expected? Not exactly: as I will show in this posting, a single PDB can, under certain conditions, cause complete system downtime.

How it should always work

Let’s take a basic example with a container database CDBPROD1 in which we set up 2 pluggable databases

  • PDBERP1
  • PDBHR1

So we have one database for the production ERP and one for the Human Resources application.

 

[Figure: 12c-architecture.png – container database CDBPROD1 hosting PDBERP1 and PDBHR1]

 

SQL> select name,open_mode from v$containers;
NAME          OPEN_MODE
------------- ------------------
CDB$ROOT      READ WRITE
PDB$SEED      READ ONLY
PDBERP1       READ WRITE
PDBHR1        READ WRITE

Being sure that an issue on one application won’t impact any other is a key point in such an architecture.
Imagine that, for some reason, a data file of the ERP database gets lost or corrupted. What would happen?

NAME          FILE_ID    STATUS
------------- ---------- ------------------
PDBHR1        12         NOT ACTIVE
PDBHR1        13         NOT ACTIVE
PDBERP1       8          NOT ACTIVE
PDBERP1       9          NOT ACTIVE
PDBERP1       10         CANNOT OPEN FILE
PDBERP1       11         NOT ACTIVE
PDB$SEED      5          NOT ACTIVE
PDB$SEED      7          NOT ACTIVE
CDB$ROOT      1          NOT ACTIVE
CDB$ROOT      3          NOT ACTIVE
CDB$ROOT      4          NOT ACTIVE
CDB$ROOT      6          NOT ACTIVE

So far the ROOT container as well as any other PDBs are still working fine.

SQL> alter system archive log current;
System altered.

As long as we have all the necessary backup pieces, we can easily restore the file and get everything back to normal (several ways are even possible).

oracle@vmoratest12c1:/home/oracle/ [CDBPROD1] rman target sys@PDBERP1
Recovery Manager: Release 12.1.0.1.0 - Production on Fri Jun 28 05:47:52 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates.All rights reserved.
target database Password: 
connected to target database: CDBPROD1 (DBID=1679079389)
RMAN> alter database datafile 10 offline;
using target database control file instead of recovery catalog
Statement processed
RMAN> restore datafile 10;
Starting restore at 28-JUN-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=26 device type=DISK
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
…
…
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 28-JUN-13

RMAN> recover datafile 10;
Starting recover at 28-JUN-13
using channel ORA_DISK_1
starting media recovery
archived log for thread 1 with sequence 11 is already on disk as file /u90/fast_recovery_area/CDBPROD1/archivelog/2013_06_28/o1_mf_1_11_8wsptjf4_.arc
archived log for thread 1 with sequence 12 is already on disk as file /u90/fast_recovery_area/CDBPROD1/archivelog/2013_06_28/o1_mf_1_12_8wspv3ph_.arc
archived log for thread 1 with sequence 13 is already on disk as file /u90/fast_recovery_area/CDBPROD1/archivelog/2013_06_28/o1_mf_1_13_8wszvlnc_.arc
...
...
Finished recover at 28-JUN-13

RMAN> alter database datafile 10 online;
Statement processed

That’s it! The ERP database is back to normal while no other PDB has been impacted.

Not always that easy!

Unfortunately, going deeper into our tests, I faced a tricky case where any single pluggable database (PDB) can cause downtime on the whole container!
Let’s see what happens if we lose the SYSTEM tablespace of one of our two PDBs…

SQL> select name,open_mode from v$containers;
NAME          OPEN_MODE
------------- -----------
CDB$ROOT      READ WRITE
PDB$SEED      READ ONLY
PDBERP1       READ WRITE
PDBHR1        READ WRITE

All containers (Root container and Pluggable databases) are currently up and running. But now we are going to delete the SYSTEM data file of the ERP pluggable database / application… yes I know, that’s bad! :roll:

oracle@vmoratest12c1:/u01/oradata/CDBPROD1/PDBERP1/ [CDBPROD1] rm -f system01.dbf

Of course, this has a huge impact on my ERP application.

oracle@vmoratest12c1:/u01/oradata/CDBPROD1/PDBERP1/ [CDBPROD1] sqlplus sys@PDBERP1 as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Fri Jun 28 13:04:39 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Enter password:
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> select * from dba_data_files;
select * from dba_data_files
 *
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01116: error in opening database file 8
ORA-01110: data file 8: '/u01/oradata/CDBPROD1/PDBERP1/system01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

As expected, the data file 8, which is the SYSTEM data file for the PDBERP1, is missing.
However, this is only one pluggable database of my whole production environment. So I expect all other applications not to be impacted.

oracle@vmoratest12c1:/u01/oradata/CDBPROD1/PDBERP1/ [CDBPROD1] sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Fri Jun 28 13:08:25 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> show CON_NAME
CON_NAME
--------------
CDB$ROOT

SQL> alter system archive log current;
System altered.
SQL> alter session set container=PDBHR1;
Session altered.
SQL> select count(*) from dba_tables;

  COUNT(*)
----------
      2316

Basically, it looks like that is the case. Great!
I’m going to take the same approach to solve my issue as before: I will simply try to restore the missing data file and get my pluggable database back to work.

oracle@vmoratest12c1:/u01/oradata/CDBPROD1/PDBERP1/ [CDBPROD1] rman target sys@PDBERP1
Recovery Manager: Release 12.1.0.1.0 - Production on Fri Jun 28 13:12:14 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
target database Password: 
connected to target database: CDBPROD1 (DBID=1679079389)
RMAN> alter database datafile 8 offline;
using target database control file instead of recovery catalog
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of sql statement command at 06/28/2013 13:12:30
ORA-01541: system tablespace cannot be brought offline; shut down if necessary

OK, it makes sense: I can’t simply set my SYSTEM data file offline while the database is still running. Alright then, I’m going to close my PDB.

Remember that a PDB cannot really be shut down unless the whole container itself is taken down: with a shutdown command, a pluggable database is simply set back to MOUNTED status.
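For reference, here is what closing a healthy PDB normally looks like (a quick sketch, using the PDBHR1 database of this setup; it simply falls back to MOUNTED):

SQL> alter pluggable database PDBHR1 close;
Pluggable database altered.

SQL> select name, open_mode from v$containers where name='PDBHR1';
NAME     OPEN_MODE
-------- ----------
PDBHR1   MOUNTED

SQL> alter pluggable database PDBHR1 open;
Pluggable database altered.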

RMAN> alter pluggable database close;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of sql statement command at 06/28/2013 13:15:43
RMAN-00600: internal error, arguments [7530] [] [] [] []
 
RMAN> shutdown immediate
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of shutdown command at 06/28/2013 13:15:53
ORA-01116: error in opening database file 8
ORA-01110: data file 8: '/u01/oradata/CDBPROD1/PDBERP1/system01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
 
RMAN> shutdown abort
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of shutdown command at 06/28/2013 13:15:59
ORA-01116: error in opening database file 8
ORA-01110: data file 8: '/u01/oradata/CDBPROD1/PDBERP1/system01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

Looks like RMAN can’t do it by itself. Whatever I tried, I always got an error. Sometimes the error even occurred when starting RMAN itself:

oracle@vmoratest12c1:/u01/oradata/CDBPROD1/PDBERP1/ [CDBPROD1] rman target sys@PDBERP1
Recovery Manager: Release 12.1.0.1.0 - Production on Fri Jun 28 13:40:16 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
target database Password: 
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00554: initialization of internal recovery manager package failed
RMAN-04005: error from target database: 
ORA-01116: error in opening database file 8
ORA-01110: data file 8: '/u01/oradata/CDBPROD1/PDBERP1/system01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00604: error occurred at recursive SQL level 2
ORA-01116: error in opening database file 8
ORA-01110: data file 8: '/u01/oradata/CDBPROD1/PDBERP1/system01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00604: error occurr

So I moved to SQL*Plus, at the pluggable database level, and tried my luck again.

SQL> alter pluggable database close;
alter pluggable database close
*
ERROR at line 1:
ORA-01116: error in opening database file 8
ORA-01110: data file 8: '/u01/oradata/CDBPROD1/PDBERP1/system01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

SQL> shutdown immediate
ORA-01116: error in opening database file 8
ORA-01110: data file 8: '/u01/oradata/CDBPROD1/PDBERP1/system01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

SQL> shutdown abort
ORA-01116: error in opening database file 8
ORA-01110: data file 8: '/u01/oradata/CDBPROD1/PDBERP1/system01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

Same story there: as long as I’m connected at the pluggable database level, I can’t close it to run any restore operation. That’s not really surprising, in fact; I have to do it at the ROOT container level.

SQL> alter pluggable database PDBERP1 close;
alter pluggable database PDBERP1 close
*
ERROR at line 1:
ORA-01116: error in opening database file 8
ORA-01110: data file 8: '/u01/oradata/CDBPROD1/PDBERP1/system01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

SQL> alter database datafile 8 offline;
alter database datafile 8 offline
*
ERROR at line 1:
ORA-01516: nonexistent log file, data file, or temporary file "8"

SQL> alter database datafile 8 offline drop;
alter database datafile 8 offline drop
*
ERROR at line 1:
ORA-01516: nonexistent log file, data file, or temporary file "8"

Now I’m seriously getting nervous, as I can’t get anything done on my PDB. It looks like the only way to get rid of this issue is to shut down the whole production container…

SQL> show CON_NAME
CON_NAME
------------------------------
CDB$ROOT

SQL> shutdown immediate
ORA-01116: error in opening database file 8
ORA-01110: data file 8: '/u01/oradata/CDBPROD1/PDBERP1/system01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

Here is the point: I can’t even take my whole container down properly! The only way to bring it down is to run a shutdown abort!

SQL> shutdown abort
ORACLE instance shut down.

This is definitely not a gentle way to get it done. And the final point here is that my container won’t even start anymore until I get the SYSTEM data file of my single PDB back!

SQL> startup
ORACLE instance started.
Total System Global Area 1636814848 bytes
Fixed Size 2288968 bytes
Variable Size 989856440 bytes
Database Buffers 637534208 bytes
Redo Buffers 7135232 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 8 - see DBWR trace file
ORA-01110: data file 8: '/u01/oradata/CDBPROD1/PDBERP1/system01.dbf'

SQL> select name,open_mode from v$database;
NAME      OPEN_MODE
--------- --------------------
CDBPROD1  MOUNTED

At that point I can go forward with the data file 8 restore and recover to get my database back.

  • restore data file 8
  • recover data file 8
RMAN> connect target /
connected to target database: CDBPROD1 (DBID=1679079389, not open)
 
RMAN> restore datafile 8;
Starting restore at 28-JUN-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=11 device type=DISK
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00008 to /u01/oradata/CDBPROD1/PDBERP1/system01.dbf
…
…
…

RMAN> recover datafile 8;
Starting recover at 28-JUN-13
using channel ORA_DISK_1
starting media recovery
archived log for thread 1 with sequence 11 is already on disk as file /u90/fast_recovery_area/CDBPROD1/archivelog/2013_06_28/o1_mf_1_11_8wsptjf4_.arc
archived log for thread 1 with sequence 12 is already on disk as file /u90/fast_recovery_area/CDBPROD1/archivelog/2013_06_28/o1_mf_1_12_8wspv3ph_.arc
archived log for thread 1 with sequence 13 is already on disk as file /u90/fast_recovery_area/CDBPROD1/archivelog/2013_06_28/o1_mf_1_13_8wszvlnc_.arc
…
…
…
media recovery complete, elapsed time: 00:00:03
Finished recover at 28-JUN-13
 
RMAN> alter database open;
Statement processed
 

oracle@vmoratest12c1:/u01/oradata/CDBPROD1/PDBERP1/ [CDBPROD1] sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Fri Jun 28 13:33:34 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
 
SQL> select name,open_mode from v$containers;
NAME         OPEN_MODE
------------ ------------
CDB$ROOT     READ WRITE
PDB$SEED     READ ONLY
PDBERP1      MOUNTED
PDBHR1       MOUNTED
 
SQL> alter pluggable database all open;
Pluggable database altered.
 
SQL> select name,open_mode from v$containers;
NAME          OPEN_MODE
------------- -------------
CDB$ROOT      READ WRITE
PDB$SEED      READ ONLY
PDBERP1       READ WRITE
PDBHR1        READ WRITE

Finally, we made it! Everything is back to normal and all databases are working fine.
As a conclusion, I would say that multitenant databases are pretty interesting and offer nice enhancements in terms of consolidation.
However, you need to know that a single PDB, even the least important one, can under certain conditions cause a complete system outage!
I hope that this will help!
If you have any remark or experiences on such tests, feel free to comment.
Cheers :-D

David

PS: I also tried to restore the data file without setting anything offline or down, as well as unplugging the PDB, but nothing helped. The restore fails because the data file can’t be locked, and the PDB must be closed before it can be unplugged. Do you see the vicious circle?

 


Oracle 12c Backup & Recovery: What’s new?


After months of waiting, Oracle Database 12c is finally available. In this posting, I am going to provide a summary of the new features of Oracle 12c in terms of backup and recovery.

Pluggable, what else?

For sure, the most remarkable change in this brand new version is the Pluggable Database, or so-called Multitenant Database, concept, which is a huge change in the whole RDBMS architecture.
So what does it offer in terms of backup and recovery?

To answer this question, let’s remember how we were able to perform application consolidation up to version 11g. Basically we had 2 choices:

  1. Merge multiple databases in a single server
  2. Merge multiple applications in a single database

Theoretically, the second option was the one providing the deepest consolidation possible. We simply integrated each application in a dedicated schema and ran it…

In theory, this was a pretty nice solution, but it also came with multiple questions:

  • Is my data properly segregated?
  • Could an application be disturbing another one?
  • Are there security issues between the applications?

…and the most tricky one:

  • How can I restore one application without touching the others?

At this point, we looked up the RMAN documentation and started sweating when we saw that a RESTORE SCHEMA command did not exist. Fortunately, looking a bit deeper, we were able to find a “workaround” using Tablespace Point In Time Restore.

Tablespace Point In Time Restore – have you ever tried that?

Believe me, this is definitely not the most friendly operation to perform on a database, especially if you are using partitioning with one tablespace per partition.

Good news, the real solution now exists! Simply create Pluggable Databases!
We could have a container database (CDB) hosting multiple pluggable databases (PDBs) with each PDB dedicated to one application. Nice, isn’t it? (For more details about PDBs have a look at Yann Neuhaus’ blog post)

At that point, we now have to think about securing our new environment and being able to react in case of failures. Of course, Oracle 12c integrates several new Backup and Recovery features that allow this:

  • Backup
    • Backup (full & incremental) a whole container including all PDBs
    • Skip some PDBs during the container backup
    • Backup (full & incremental) a single pluggable database
    • Backup only specific tablespaces in some PDBs
    • Skip some specific tablespaces for some PDBs
  • Restore
    • Complete restore of a container or a single pluggable
    • Point In Time Restore of a whole container
    • Point In Time Restore of a single pluggable database!

Yes: with PDBs, you are now able to restore an application (represented by a dedicated PDB) to a previous point in time without impacting the container or any other application / pluggable database.
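To give an idea, a point-in-time recovery of a single PDB could look like the following sketch (the PDB name, timestamp, and auxiliary destination are illustrative, not taken from a real system):

RMAN> alter pluggable database PDBERP1 close;

RMAN> run {
  set until time "to_date('2013-06-28 12:00:00','YYYY-MM-DD HH24:MI:SS')";
  restore pluggable database PDBERP1;
  recover pluggable database PDBERP1 auxiliary destination '/u90/aux';
  alter pluggable database PDBERP1 open resetlogs;
}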

RMAN still growing up

Since Oracle 9i, RMAN has been growing and extending its capabilities version after version. This is also true for Oracle Database 12c.

Up to 11g, there were already multiple elements we could restore:

  • Database
  • Tablespace
  • Data Files
  • Corrupted Blocks

With Oracle 12c, it is now possible to restore a single table, even to a previous point in time. Even though I am not a huge fan of Tablespace Point In Time Restore (it can still help in some cases), the capability of restoring a single table looks pretty interesting.

I guess that some of you have already faced issues during application patching or maintenance where you had to bring back a specific table. If no export is available, it can easily become a tough challenge. Having the possibility to get it back from a backup can make life quite a bit easier.

Of course the problems with constraints, sequences, or triggers must still be taken into consideration, but we will have time enough to discuss that in a dedicated post about Table Restore.
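For illustration, a table restore with the new RECOVER TABLE command looks roughly like this (schema, table, PDB, timestamp, and paths are illustrative; RMAN builds a temporary auxiliary instance under the given destination, and the REMAP TABLE clause avoids overwriting the existing table):

RMAN> recover table SCOTT.EMP of pluggable database PDBERP1
  until time "to_date('2013-06-28 12:00:00','YYYY-MM-DD HH24:MI:SS')"
  auxiliary destination '/u90/aux'
  remap table SCOTT.EMP:EMP_RECOVERED;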

A nice feature that came with Oracle 11g was the ability to duplicate a database online without using any backup. However, the question usually raised while preparing such a duplicate was the impact in terms of performance on the source database, which is often the production one.

With Oracle 12c, the notion of push and pull methods has been clarified, allowing better control over where the load and processes will mainly run.

From now on, depending on the method used, the processing will mainly run on either the target or the auxiliary database.

Backing up big tablespaces, or simply tablespaces composed of 20-30 GB data files, is not always easy and effective. With Oracle 12c, a new parameter has been introduced: SECTION_SIZE.

It allows splitting data files into several pieces and multiplexing their backup across multiple channels. SECTION_SIZE is available for backup sets as well as for image copies and duplicates. For incremental backups, RMAN can still use the Block Change Tracking (Fast Incremental Backup) feature.
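As a quick sketch (tablespace name and size are illustrative), a sectioned backup looks like this:

RMAN> backup section size 8g tablespace big_data;

# new in 12c: SECTION SIZE is also allowed for image copies
RMAN> backup as copy section size 8g tablespace big_data;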

The next point is cross-platform data transport. With Oracle 12c, it is now much easier to move data, or even a whole database, from one platform to another. The cross-platform transport can now be done using backup sets or image copies.

The RMAN backup command supports the TO PLATFORM and FOR TRANSPORT clauses. These generate a metadata export using Data Pump, which permits restoring a tablespace or a whole database on a different platform. Tablespaces can now be transported between any supported platforms such as Linux, Windows, AIX, Solaris, or HP-UX. In the case of a complete database transport, the source and destination platforms must use the same ENDIAN format, which can be checked in the view V$TRANSPORTABLE_PLATFORM.
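A minimal sketch of such a transport backup (tablespace, platform string, and file names are illustrative; the tablespace is assumed to have been set read-only beforehand):

RMAN> backup to platform 'Linux x86 64-bit'
  format '/tmp/xtts_users.bck'
  datapump format '/tmp/xtts_users_meta.bck'
  tablespace users;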

I hope this will help some of you. See you soon for more details on Oracle 12c features :roll:

David

 


Pgbarman: PostgreSQL backup and recovery management


At the end of 2012, I presented "pgbarman", a backup and recovery solution for PostgreSQL, and described its installation. Pgbarman provides a set of commands allowing you to implement backups to a dedicated server. In my opinion, its main benefit is the management of a catalog of your backups and the generation of the commands needed for a rebuild, either locally or on a third-party server.

I will now talk about backup management and describe how to perform a point-in-time recovery (PITR).

Backup management

With Pgbarman, we have at our disposal a configuration file and commands containing the parameters for managing backup retention. Since version 1.2, two retention policies can be applied:

  • a time-based retention policy
  • a retention policy based on the number of backups.

The Pgbarman sample configuration file shows the possible settings.
L’exemple de fichier de configuration de Pgbarman nous montre les paramétrages possibles.

;; ; Minimum number of required backups (redundancy)
;; ; minimum_redundancy = 1
;;
;; ; Examples of retention policies
;;
;; ; Retention policy (disabled)
;; ; retention_policy =
;; ; Retention policy (based on redundancy)
;; ; retention_policy = REDUNDANCY 2
;; ; Retention policy (based on recovery window)
;; ; retention_policy = RECOVERY WINDOW OF 4 WEEKS

N.B.: the minimum_redundancy = 1 parameter prevents the deletion of the last backup and protects against accidental deletion.

The effect of these settings can be seen with the following commands.
The barman status command gives us a summary of the state of the backups:

barman@vmpgdeb1:/etc/barman$ barman status dbi
Server dbi:
    description: dbi PostgreSQL Database
    PostgreSQL version: 9.1.8
    PostgreSQL Data directory: /u01/pgdata/dbi
    archive_command: rsync -a %p barman@vmpgdeb1:/u03/pg/backup/dbi/incoming/%f
    archive_status: last shipped WAL segment 0000000100000000000000D3
    current_xlog: 0000000100000000000000D3
    Retention policies: enforced (mode: auto, retention: REDUNDANCY 4, WAL retention: main)
    No. of available backups: 5
    first available backup: 20130927T173313
    last available backup: 20131010T093548

The barman list-backup command gives us the detailed list of backups:

barman list-backup dbi
dbi 20131010T093548 - Thu Oct 10 09:36:02 2013 - Size: 215.0 MiB - WAL Size: 0 B
dbi 20131002T114643 - Wed Oct  2 11:46:53 2013 - Size: 197.0 MiB - WAL Size: 1.0 GiB
dbi 20130929T113103 - Sun Sep 29 11:31:09 2013 - Size: 107.0 MiB - WAL Size: 915.0 MiB
dbi 20130927T174926 - Fri Sep 27 17:51:55 2013 - Size: 107.0 MiB - WAL Size: 6.0 MiB
dbi 20130927T173313 - Fri Sep 27 17:45:59 2013 - Size: 126.0 MiB - WAL Size: 82.0 MiB - OBSOLETE

As we can see, the retention policy is REDUNDANCY 4.
We have 5 available backups, which is why Pgbarman considers the backup of "Sep 27 17:45:59 2013" obsolete.
Consequently, the next run of the cron command will purge it:

barman cron
Processing xlog segments for dbi
    0000000100000000000000D3
Deleting backup 20130927T173313 for server dbi
Delete associated WAL segments:
    00000001000000000000001F
    000000010000000000000020
    000000010000000000000020.00000020.backup
    000000010000000000000021
    ......
    000000010000000000000037
    000000010000000000000038
Done

 

barman list-backup dbi
dbi 20131010T093548 - Thu Oct 10 09:36:02 2013 - Size: 215.0 MiB - WAL Size: 16.0 MiB
dbi 20131002T114643 - Wed Oct  2 11:46:53 2013 - Size: 197.0 MiB - WAL Size: 1.0 GiB
dbi 20130929T113103 - Sun Sep 29 11:31:09 2013 - Size: 107.0 MiB - WAL Size: 915.0 MiB
dbi 20130927T174926 - Fri Sep 27 17:51:55 2013 - Size: 107.0 MiB - WAL Size: 6.0 MiB
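For completeness, the maintenance done by barman cron (WAL processing and retention enforcement) is normally automated; a typical crontab entry for the barman user could look like this (the path to the barman binary may differ on your system):

* * * * * /usr/bin/barman cron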

It is also possible to delete an intermediate backup with the delete command. An example:

barman list-backup dbi
dbi 20131010T103012 - Thu Oct 10 10:30:22 2013 - Size: 215.0 MiB - WAL Size: 0 B
dbi 20131010T093548 - Thu Oct 10 09:36:02 2013 - Size: 215.0 MiB - WAL Size: 16.0 MiB
dbi 20131002T114643 - Wed Oct  2 11:46:53 2013 - Size: 197.0 MiB - WAL Size: 1.0 GiB
dbi 20130929T113103 - Sun Sep 29 11:31:09 2013 - Size: 107.0 MiB - WAL Size: 915.0 MiB
dbi 20130927T174926 - Fri Sep 27 17:51:55 2013 - Size: 107.0 MiB - WAL Size: 6.0 MiB - OBSOLETE

barman delete dbi 20131010T093548
Deleting backup 20131010T093548 for server dbi
Done

barman list-backup dbi
dbi 20131010T103012 - Thu Oct 10 10:30:22 2013 - Size: 215.0 MiB - WAL Size: 0 B
dbi 20131002T114643 - Wed Oct  2 11:46:53 2013 - Size: 197.0 MiB - WAL Size: 1.0 GiB
dbi 20130929T113103 - Sun Sep 29 11:31:09 2013 - Size: 107.0 MiB - WAL Size: 915.0 MiB
dbi 20130927T174926 - Fri Sep 27 17:51:55 2013 - Size: 107.0 MiB - WAL Size: 6.0 MiB

As you can see, the obsolescence of the last backup is not tied to the number of backups taken but to the number residing on disk.


Restore to a given point in time (PITR)

Description of the situation: we have a dbi database on the vmpgdeb2 server, backed up to our backup server with pgbarman.
We have a project to upgrade this database, and we want to test the procedure. To do so, we will rebuild a test database on the vmpgdeb3 machine from a backup of the database as of October 9, 17:00.

The recovery command will be the following:

barman recover --remote-ssh-command='ssh postgres@vmpgdeb3' --target-time '2013-10-09 17:00:00.000' dbi 20131002T114643 /u02/pg/data/testdb

Beforehand, the connection to the vmpgdeb3 server must be authorized and the postgres user must be granted creation rights on the /u02/pg/data directory.
Execution of the command, which is really more a restore than a rebuild:

barman recover --remote-ssh-command='ssh postgres@vmpgdeb3' --target-time '2013-10-09 17:00:00.000' dbi 20131002T114643 /u02/pg/data/testdb
Starting remote restore for server dbi using backup 20131002T114643
Destination directory: /u02/pg/data/testdb
Doing PITR. Recovery target time: '2013-10-09 17:00:00'
Copying the base backup.
Copying required wal segments.
Generating recovery.conf
The archive_command was set to 'false' to prevent data losses.
Your PostgreSQL server has been successfully prepared for recovery!
Please review network and archive related settings in the PostgreSQL
configuration file before starting the just recovered instance.

The result on the vmpgdeb3 server is:

  1. the creation of a directory containing the database
  2. the creation in this directory, in addition to the database files, of the barman_xlog directory containing all the WAL files needed for the recovery
  3. the backup of the postgresql.conf file under the name postgresql.conf.origin
  4. the creation of a recovery.conf file in which we find the recovery commands.

Contents of recovery.conf:

cat recovery.conf
restore_command = 'cp barman_xlog/%f %p'
recovery_end_command = 'rm -fr barman_xlog'
recovery_target_time = '2013-10-09 17:00:00.000'

Modification of postgresql.conf:

diff postgresql.conf.origin postgresql.conf
183c183,184
< archive_command = 'rsync -a %p barman@vmpgdeb1:/u03/pg/backup/dbi/incoming/%f'    # command to use to archive a logfile segment
---
> #BARMAN# archive_command = 'rsync -a %p barman@vmpgdeb1:/u03/pg/backup/dbi/incoming/%f'    # command to use to archive a logfile segment
> archive_command = false

Before starting the recovery, we must adjust the parameters specific to our working environment, namely:

  • adding the cluster to the postgresql.cnf file of our environment manager dmkpg
  • creating an administration directory for the database
  • changing the connection port and the logging destination

Once the environment is set, we can then start the database cluster with the command:

pg_ctl start
server starting

The recovery process starts immediately and stops once it has reached the last complete transaction before the restore point. In our case, the database had been stopped between October 6 and October 9, so the restore could not go beyond the last recorded complete transaction:

last completed transaction was at log time 2013-10-06 17:19:26.555007+02
2013-10-10 12:03:15 CEST [2597]: [93-1] user=,db= LOG:  restored log file "0000000100000000000000D1" from archive

An examination of the WALs shows that the database had produced archives since October 6, but only because the archive_timeout = 3600 parameter was active or because the database cluster had been restarted:

-rw------- 1 barman postgres 16777216 Oct  9 11:47 /u03/pg/backup/dbi/wals/0000000100000000/0000000100000000000000CA
-rw------- 1 barman postgres 16777216 Oct  9 12:47 /u03/pg/backup/dbi/wals/0000000100000000/0000000100000000000000CB
-rw------- 1 barman postgres 16777216 Oct  9 13:47 /u03/pg/backup/dbi/wals/0000000100000000/0000000100000000000000CC
-rw------- 1 barman postgres 16777216 Oct  9 14:47 /u03/pg/backup/dbi/wals/0000000100000000/0000000100000000000000CD
-rw------- 1 barman postgres 16777216 Oct  9 15:47 /u03/pg/backup/dbi/wals/0000000100000000/0000000100000000000000CE
-rw------- 1 barman postgres 16777216 Oct  9 16:24 /u03/pg/backup/dbi/wals/0000000100000000/0000000100000000000000CF

Once the recovery is finished, the postgres engine renames the recovery.conf file to recovery.done, opens the database with a new timeline, and produces WAL files on this new timeline, which gives:

-rw-------  1 postgres postgres 16777216 Oct 10 12:03 0000000200000000000000D1
-rw-------  1 postgres postgres 16777216 Oct 10 12:07 0000000200000000000000D2
-rw-------  1 postgres postgres 16777216 Oct 10 12:11 0000000200000000000000D3
-rw-------  1 postgres postgres 16777216 Oct 10 12:11 0000000200000000000000D4

The database is immediately available at the end of the recovery.

Conclusion

Pgbarman is a simple and effective solution for backing up your PostgreSQL databases. There are not yet any commands to set up database replication from Pgbarman. Since the remaining work does not seem very large, the 2ndQuadrant development team is thinking about it and is open to sponsorship proposals for such a feature.

 



The consequences of NOLOGGING in Oracle


While answering a question on an Oracle forum about NOLOGGING consequences, I provided a test case that deserves a bit more explanation. Nologging operations are good for generating minimal redo on bulk operations (direct-path inserts, index creation/rebuild). But in case we have to restore a backup that was made before the nologging operation, we lose data. And even if we can accept that, we have some manual operations to do.

Here is the full testcase.

I create a tablespace and back it up:

RMAN> create tablespace demo datafile '/tmp/demo.dbf' size 10M; 
Statement processed
RMAN> backup tablespace demo; 
Starting backup at 23-MAR-14 
allocated channel: ORA_DISK_1 
channel ORA_DISK_1: SID=30 device type=DISK 
channel ORA_DISK_1: starting full datafile backup set 
channel ORA_DISK_1: specifying datafile(s) in backup set 
input datafile file number=00005 name=/tmp/demo.dbf 
channel ORA_DISK_1: starting piece 1 at 23-MAR-14 
channel ORA_DISK_1: finished piece 1 at 23-MAR-14 
piece handle=/u01/app/oracle/fast_recovery_area/U1/backupset/2014_03_23/o1_mf_nnndf_TAG20140323T160453_9lxy0pfb_.bkp tag=TAG20140323T160453 comment=NONE 
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01 
Finished backup at 23-MAR-14

I create a table and an index, both in NOLOGGING:

RMAN> create table demo ( dummy not null ) tablespace demo nologging as select * from dual connect by level <= 1000;
Statement processed
RMAN> create index demo on demo(dummy) tablespace demo nologging; 
Statement processed

Note how I like 12c for doing anything from RMAN…
Because I will need it later, I do a treedump of my index:

RMAN> begin 
2>  for o in (select object_id from dba_objects where owner=user and object_name='DEMO' and object_type='INDEX') 
3>   loop execute immediate 'alter session set tracefile_identifier=''treedump'' events ''immediate trace name treedump level '||o.object_id||''''; 
4> end loop; 
5> end; 
6> / 
Statement processed

Here is the content of my treedump trace file:

----- begin tree dump 
branch: 0x140008b 20971659 (0: nrow: 2, level: 1) 
   leaf: 0x140008c 20971660 (-1: nrow: 552 rrow: 552) 
   leaf: 0x140008d 20971661 (0: nrow: 448 rrow: 448) 
----- end tree dump

Because of the nologging operations, the tablespace is ‘unrecoverable’, and we will see what that means.

RMAN> report unrecoverable; 
Report of files that need backup due to unrecoverable operations 
File Type of Backup Required Name 
---- ----------------------- ----------------------------------- 
5    full or incremental     /tmp/demo.dbf

RMAN tells me that I need to do a backup, which is the right thing to do after nologging operations. But here my goal is to show what happens when we have to restore a backup that was done before the nologging operations.

I want to show that the issue does not only concern the data that I've loaded, but any data that may come later into the blocks that have been formatted by the nologging operation. So I'm deleting the rows and inserting a new one.

RMAN> delete from demo; 
Statement processed
RMAN> insert into demo select * from dual; 
Statement processed

Time to restore the tablespace from the backup that has been done before the nologging operation:

RMAN> alter tablespace demo offline; 
Statement processed
RMAN> restore tablespace demo; 
Starting restore at 23-MAR-14 
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile backup set restore 
channel ORA_DISK_1: specifying datafile(s) to restore from backup set 
channel ORA_DISK_1: restoring datafile 00005 to /tmp/demo.dbf 
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/U1/backupset/2014_03_23/o1_mf_nnndf_TAG20140323T160453_9lxy0pfb_.bkp 
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/U1/backupset/2014_03_23/o1_mf_nnndf_TAG20140323T160453_9lxy0pfb_.bkp tag=TAG20140323T160453 
channel ORA_DISK_1: restored backup piece 1 
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01 
Finished restore at 23-MAR-14
RMAN> recover tablespace demo; 
Starting recover at 23-MAR-14 
using channel ORA_DISK_1
starting media recovery 
media recovery complete, elapsed time: 00:00:00
Finished recover at 23-MAR-14
RMAN> alter tablespace demo online; 
Statement processed

We can check the unrecoverable tablespace:

RMAN> report unrecoverable; 
Report of files that need backup due to unrecoverable operations 
File Type of Backup Required Name 
---- ----------------------- ----------------------------------- 
5    full or incremental     /tmp/demo.dbf

but we don’t know which objects are concerned until we try to read from them:

RMAN> select /*+ full(demo) */ count(*) from demo; 
RMAN-00571: =========================================================== 
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS =============== 
RMAN-00571: =========================================================== 
RMAN-03002: failure of sql statement command at 03/23/2014 16:05:03 
ORA-01578: ORACLE data block corrupted (file # 5, block # 131) 
ORA-01110: data file 5: '/tmp/demo.dbf' 
ORA-26040: Data block was loaded using the NOLOGGING option
RMAN> select /*+ index(demo) */ count(*) from demo; 
RMAN-00571: =========================================================== 
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS =============== 
RMAN-00571: =========================================================== 
RMAN-03002: failure of sql statement command at 03/23/2014 16:05:04 
ORA-01578: ORACLE data block corrupted (file # 5, block # 140) 
ORA-01110: data file 5: '/tmp/demo.dbf' 
ORA-26040: Data block was loaded using the NOLOGGING option

So I can’t read from the table because of block (file # 5, block # 131) which is corrupted and I can’t read from the index because of block (file # 5, block # 140) which is corrupted. The reason is that recovery was not possible on them as there was no redo to protect them from the time they were formatted (by the nologging operation).

Let’s see which blocks were reported:

RMAN> select segment_type,header_file,header_block , dbms_utility.make_data_block_address(header_file,header_block) from dba_segments where owner=user and segment_name='DEMO'; 
SEGMENT_TYPE       HEADER_FILE HEADER_BLOCK 
------------------ ----------- ------------ 
DBMS_UTILITY.MAKE_DATA_BLOCK_ADDRESS(HEADER_FILE,HEADER_BLOCK) 
-------------------------------------------------------------- 
INDEX                        5          138 
                                                      20971658
TABLE                        5          130 
                                                      20971650
RMAN> select dbms_utility.make_data_block_address(5, 140) from dual;
DBMS_UTILITY.MAKE_DATA_BLOCK_ADDRESS(5,140) 
------------------------------------------- 
                                   20971660

The full scan failed as soon as it read block 131, which is the first one that contains data. The segment header block itself was protected by redo.

For the index, the query failed on block 140, which is the first leaf (this is why I did a treedump above). The root branch (which always comes right after the segment header) seems to be protected by redo even for nologging operations. The reason why I checked that is because in the first testcase I posted in the forum, I had a very small table for which the index was so small that it had only one leaf – which was the root branch as well – so the index was still recoverable.
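As a side note, when no treedump is at hand, a standard dictionary query can map a reported corrupt block back to its segment. Here is a sketch using dba_extents with the file and block numbers reported above:

-- which segment owns block 140 of file 5?
select owner, segment_name, segment_type
from dba_extents
where file_id = 5
and 140 between block_id and block_id + blocks - 1;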

The important point to know is that the index is still valid:

RMAN> select status from all_indexes where index_name='DEMO'; 
STATUS   
-------- 
VALID

And the only solution is to truncate the table:

RMAN> truncate table demo; 
Statement processed
RMAN> select /*+ full(demo) */ count(*) from demo; 
  COUNT(*) 
---------- 
         0
RMAN> select /*+ index(demo) */ count(*) from demo; 
  COUNT(*) 
---------- 
         0

no corruption anymore, but no data either…

Last point: if you have only indexes that are unrecoverable, you can rebuild them. But because the index is still flagged as valid, Oracle will try to read it in order to rebuild it – and fail with ORA-26040. You have to make them unusable first, as sketched below.
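Here is a sketch of that sequence for my testcase index (in real life you would do this for each unrecoverable index):

-- mark the index unusable so that the rebuild does not try to read it
alter index demo unusable;
-- the rebuild then recreates the index from the table data
alter index demo rebuild;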

The core message is:

  • Use nologging only when you accept losing data and accept having some manual operations to do after recovery (so document it): truncate the table, make the indexes unusable, and rebuild them.
  • Back up the unrecoverable tablespaces as soon as you can after your nologging operations.
  • If you need redo for other goals (such as a standby database), use force logging (see the sketch after this list).
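A hedged sketch of the corresponding commands (generic statements, not tied to the testcase above):

-- prevent any nologging operation database-wide (recommended with a standby)
alter database force logging;
-- or, after an intentional nologging load, back up the affected tablespace:
-- RMAN> backup tablespace demo;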
 


PDB media failure may cause the whole CDB to crash


Do you remember last year, when 12c arrived with multitenant, David Hueber warned us that a single PDB can, under certain conditions, generate a complete system downtime? We are beta testers and opened an SR for that. Now, one year later, the first patchset is out and obviously I checked whether the issue was fixed. It's a patchset after all, which is expected to fix issues rather than bring new features.

So the issue was that when the SYSTEM tablespace is lost in a PDB, we cannot restore it without shutting down the whole CDB. This is because we cannot take the SYSTEM tablespace offline, and we cannot close the PDB as a checkpoint cannot be done. There is no SHUTDOWN ABORT for a PDB that can force it. Conclusion: if you lose one SYSTEM tablespace, either you accept to wait for a maintenance window before bringing it back online, or you have to stop the whole CDB with a shutdown abort.

When I receive a new release, I like to check the new parameters, even the undocumented ones. And in 12.1.0.2 there is a new underscore parameter, _enable_pdb_close_abort, with the description 'Enable PDB shutdown abort (close abort)'. Great. It has a default value of false, but maybe this is how the bug has been addressed.
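For the record, here is what I had in mind to try. This is only a sketch: the close abort syntax is an assumption inferred from the parameter description, and an undocumented underscore parameter should of course never be set without the blessing of Oracle Support:

-- assumption: inferred from the parameter description, not documented
alter system set "_enable_pdb_close_abort"=true scope=spfile;
-- after restarting the instance, the idea would be to force a PDB closed:
alter pluggable database PDB2 close abort;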

Before trying that parameter, let’s reproduce the case:

Here are my datafiles:

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB_SITE1

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    790      SYSTEM               YES     /u01/app/oracle/oradata/CDB/system01.dbf
3    680      SYSAUX               NO      /u01/app/oracle/oradata/CDB/sysaux01.dbf
4    215      UNDOTBS1             YES     /u01/app/oracle/oradata/CDB/undotbs01.dbf
5    250      PDB$SEED:SYSTEM      NO      /u01/app/oracle/oradata/CDB/pdbseed/system01.dbf
6    5        USERS                NO      /u01/app/oracle/oradata/CDB/users01.dbf
7    540      PDB$SEED:SYSAUX      NO      /u01/app/oracle/oradata/CDB/pdbseed/sysaux01.dbf
8    250      PDB1:SYSTEM          NO      /u01/app/oracle/oradata/CDB/PDB1/system01.dbf
9    570      PDB1:SYSAUX          NO      /u01/app/oracle/oradata/CDB/PDB1/sysaux01.dbf
10   5        PDB1:USERS           NO      /u01/app/oracle/oradata/CDB/PDB1/PDB1_users01.dbf
11   250      PDB2:SYSTEM          NO      /u01/app/oracle/oradata/CDB/PDB2/system01.dbf
12   570      PDB2:SYSAUX          NO      /u01/app/oracle/oradata/CDB/PDB2/sysaux01.dbf
13   5        PDB2:USERS           NO      /u01/app/oracle/oradata/CDB/PDB2/PDB2_users01.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    197      TEMP                 32767       /u01/app/oracle/oradata/CDB/temp01.dbf
2    100      PDB$SEED:TEMP        32767       /u01/app/oracle/oradata/CDB/pdbseed/pdbseed_temp012014-06-15_09-46-11-PM.dbf
3    20       PDB1:TEMP            32767       /u01/app/oracle/oradata/CDB/PDB1/temp012014-06-15_09-46-11-PM.dbf
4    20       PDB2:TEMP            32767       /u01/app/oracle/oradata/CDB/PDB2/temp012014-06-15_09-46-11-PM.dbf

then I just remove the PDB2 SYSTEM datafile:

rm /u01/app/oracle/oradata/CDB/PDB2/system01.dbf 

And I go to sqlplus in order to check the state of my PDB. Remember, I want to see if I can restore the datafile without doing a shutdown abort on my CDB instance.

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jul 27 20:31:45 2014

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> select name,open_mode from v$pdbs;
select name,open_mode from v$pdbs
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0

Oh… that’s bad… Let’s look at the alert.log:

Errors in file /u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_ckpt_21620.trc:
ORA-01243: system tablespace file suffered media failure
ORA-01116: error in opening database file 11
ORA-01110: data file 11: '/u01/app/oracle/oradata/CDB/PDB2/system01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
USER (ospid: 21620): terminating the instance due to error 1243
System state dump requested by (instance=1, osid=21620 (CKPT)), summary=[abnormal instance termination].
System State dumped to trace file /u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_diag_21608_20140727202844.trc
2014-07-27 20:28:49.596000 +02:00
Instance terminated by USER, pid = 21620

The CKPT process has terminated the instance. The whole CDB is down.

That’s worse. In 12.1.0.1 we had to bring down the instance, but at least we were able to choose the time and warn the users. Not here. In 12.1.0.2 it crashes immediately when a checkpoint occurs.

I’ve opened a bug for that (Bug 19001390 – PDB SYSTEM TABLESPACE MEDIA FAILURE CAUSES THE WHOLE CDB TO CRASH) which is expected to be fixed for the next release (12.2).

Ok the good news is that once the CDB is down, recovery is straightforward:

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Jul 27 21:36:22 2014

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup

Oracle instance started
database mounted
database opened

Total System Global Area     838860800 bytes

Fixed Size                     2929936 bytes
Variable Size                616565488 bytes
Database Buffers             213909504 bytes
Redo Buffers                   5455872 bytes


RMAN> list failure;

using target database control file instead of recovery catalog
Database Role: PRIMARY

List of Database Failures
=========================

Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
3353       CRITICAL OPEN      27-JUL-14     System datafile 11: '/u01/app/oracle/oradata/CDB/PDB2/system01.dbf' is missing
245        HIGH     OPEN      27-JUL-14     One or more non-system datafiles need media recovery


RMAN> advise failure;

Database Role: PRIMARY

List of Database Failures
=========================

Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
3353       CRITICAL OPEN      27-JUL-14     System datafile 11: '/u01/app/oracle/oradata/CDB/PDB2/system01.dbf' is missing
245        HIGH     OPEN      27-JUL-14     One or more non-system datafiles need media recovery

analyzing automatic repair options; this may take some time
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=132 device type=DISK
analyzing automatic repair options complete

Mandatory Manual Actions
========================
no manual actions available

Optional Manual Actions
=======================
1. If file /u01/app/oracle/oradata/CDB/PDB2/system01.dbf was unintentionally renamed or moved, restore it
2. Automatic repairs may be available if you shutdown the database and restart it in mount mode
3. If you restored the wrong version of data file /u01/app/oracle/oradata/CDB/PDB2/sysaux01.dbf, then replace it with the correct one
4. If you restored the wrong version of data file /u01/app/oracle/oradata/CDB/PDB2/PDB2_users01.dbf, then replace it with the correct one

Automated Repair Options
========================
Option Repair Description
------ ------------------
1      Restore and recover datafile 11; Recover datafile 12; Recover datafile 13
  Strategy: The repair includes complete media recovery with no data loss
  Repair script: /u01/app/oracle/diag/rdbms/cdb/CDB/hm/reco_3711091289.hm


RMAN> repair failure;

Strategy: The repair includes complete media recovery with no data loss
Repair script: /u01/app/oracle/diag/rdbms/cdb/CDB/hm/reco_3711091289.hm

contents of repair script:
   # restore and recover datafile
   restore ( datafile 11 );
   recover datafile 11;
   # recover datafile
   recover datafile 12, 13;

Do you really want to execute the above repair (enter YES or NO)? YES
executing repair script

Starting restore at 27-JUL-14
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00011 to /u01/app/oracle/oradata/CDB/PDB2/system01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CDB/FECFFDC5F6D31F5FE043D74EA8C0715F/backupset/2014_07_28/o1_mf_nnndf_TAG20140728T150921_9xdlw21n_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CDB/FECFFDC5F6D31F5FE043D74EA8C0715F/backupset/2014_07_28/o1_mf_nnndf_TAG20140728T150921_9xdlw21n_.bkp tag=TAG20140728T150921
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 27-JUL-14

Starting recover at 27-JUL-14
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 27-JUL-14

Starting recover at 27-JUL-14
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 27-JUL-14
repair failure complete


RMAN> alter pluggable database PDB2 open;

Statement processed

I wish that one day PDBs will have true isolation so that I can give DBA rights to the application owner on his PDB. But that means that, at least:

  • A PDB failure cannot crash the CDB instance.
  • A PDB admin cannot create datafiles anywhere on my server.
  • A PDB admin cannot run anything as the instance owner user (usually oracle)
 


What’s the consequence of NOLOGGING loads?


When you load data in direct-path mode with the NOLOGGING attribute set, you minimize redo generation, but you take the risk, in case of media recovery, of losing the data in the blocks that you've loaded. So you probably run a backup as soon as the load is done. But what happens if you have a crash, with media failure, before the backup is finished?

I recently encountered this situation but – probably because of a bug – the result was not exactly what I expected. Of course, before saying that it's a bug I need to clear any doubt about what I think the normal situation is. So I've reproduced the normal situation and I'm sharing it here in case someone wants to see how to handle it.

First, let me emphasize something very important. I didn't say that you lose the data that you've loaded. You lose the data that was in the blocks allocated by your load. This may concern conventional DML happening long after the nologging load. And anyway, you probably lose the whole table (or partition) because, as you will see, the proper way to recover from a nologging operation is to truncate the table (or partition).

I’m in 12c so I can run my SQL statements from RMAN. I create a DEMO tablespace and a 1000-row table in it:

RMAN> echo set on

RMAN> create tablespace DEMO datafile '/tmp/demo.dbf' size 10M;
Statement processed

RMAN> create table DEMO.DEMO pctfree 99 tablespace DEMO nologging as select * from dual connect by level <= 1000;
Statement processed

RMAN> commit;
Statement processed

Imagine that I’ve a media failure and I have to restore my tablespace:

RMAN> alter tablespace DEMO offline;
Statement processed


RMAN> restore tablespace DEMO;
Starting restore at 04-SEP-14
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=8 device type=DISK

creating datafile file number=2 name=/tmp/demo.dbf
restore not done; all files read only, offline, or already restored
Finished restore at 04-SEP-14

and recover up to the point of failure:

RMAN> recover tablespace DEMO;
Starting recover at 04-SEP-14
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 04-SEP-14

RMAN> alter tablespace DEMO online;
Statement processed

Then here is what happens when I want to query the table where I loaded data without logging:

RMAN> select count(*) from DEMO.DEMO;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of sql statement command at 09/04/2014 16:21:27
ORA-01578: ORACLE data block corrupted (file # 2, block # 131)
ORA-01110: data file 2: '/tmp/demo.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option

Let’s see that:

RMAN> validate tablespace DEMO;
Starting validate at 04-SEP-14
using channel ORA_DISK_1
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
input datafile file number=00002 name=/tmp/demo.dbf
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
2    OK     167            974          1280            6324214
  File Name: /tmp/demo.dbf
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              0
  Index      0              0
  Other      0              306

Finished validate at 04-SEP-14

167 blocks have been marked as corrupt.

The solution is to truncate the concerned table.

And if you don’t know which tables are concerned, then you need to check v$database_block_corruption and dba_extents. So my advice is that tables loaded in NOLOGGING should be documented in the recovery plan, together with the way to reload their data. Of course, that’s not an easy task, because NOLOGGING is usually done by developers and recovery is done by the DBA. The other alternative is to prevent any NOLOGGING operation and put the database in FORCE LOGGING. In a Data Guard configuration, you should do that anyway.
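Here is a sketch of such a check, joining the corruption list to the extent map (standard dictionary views; run a VALIDATE first so that v$database_block_corruption is populated):

-- which segments are hit by the corrupt blocks reported by VALIDATE?
select distinct e.owner, e.segment_name, e.segment_type
from v$database_block_corruption c
join dba_extents e
on e.file_id = c.file#
and c.block# between e.block_id and e.block_id + e.blocks - 1;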

So I truncate my table:

RMAN> truncate table DEMO.DEMO;
Statement processed

and if I check my tablespace, I still see the blocks as 'Marked Corrupt':

RMAN> validate tablespace DEMO;
Starting validate at 04-SEP-14
using channel ORA_DISK_1
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
input datafile file number=00002 name=/tmp/demo.dbf
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
2    OK     167            974          1280            6324383
  File Name: /tmp/demo.dbf
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              0
  Index      0              0
  Other      0              306

Finished validate at 04-SEP-14

This is the normal behaviour. The blocks are still marked as corrupt until they are formatted again.

I put back my data:

RMAN> insert /*+ append */ into DEMO.DEMO select * from dual connect by level <= 1000;
Statement processed

RMAN> commit;
Statement processed

And check my tablespace again:

RMAN> validate tablespace DEMO;
Starting validate at 04-SEP-14
using channel ORA_DISK_1
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
input datafile file number=00002 name=/tmp/demo.dbf
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
2    OK     0              974          1280            6324438
  File Name: /tmp/demo.dbf
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              167
  Index      0              0
  Other      0              139

Finished validate at 04-SEP-14

The 167 corrupted blocks have been reused, now being safe and containing my newly loaded data.

This is the point I wanted to validate, because I’ve seen a production database where the blocks remained marked as corrupt. The load had allocated extents containing those blocks but, fortunately, had avoided putting rows in them. However, monitoring still reports corrupt blocks, and we will have to fix that as soon as we can by moving the tables to another tablespace.

Last point. If you want to see if some tablespace had NOLOGGING operations since the last backup, run:

RMAN> report unrecoverable;
Report of files that need backup due to unrecoverable operations
File Type of Backup Required Name
---- ----------------------- -----------------------------------
2    full                    /tmp/demo.dbf

This is an indication that you should back up that datafile now. Knowing the objects concerned is a lot more complex…

I’ll not open an SR as I can’t reproduce the issue I encountered (corrupt flag remaining after reallocating blocks), but if anyone has had that kind of issue, please share.

 


Is CDB stable after one patchset and two PSU?


There has been the announcement that non-CDB is deprecated, and the reaction that CDB is not yet stable.

Well. Let’s talk about the major issue I’ve encountered. Multitenant is there for consolidation. What is the major requirement of consolidation? It’s availability. If you put all your databases onto one server, managed by one instance, then you don’t expect a failure.

When 12c was out (and even earlier, as we are beta testers) – 12.1.0.1 – David Hueber encountered an important issue: when a SYSTEM datafile was lost, we could not recover it without stopping the whole CDB. That’s bad, of course.

When Patchset 1 was out (and we were beta testers again) I tried to check whether that had been solved. I saw that they had introduced the undocumented "_enable_pdb_close_abort" parameter in order to allow a shutdown abort of a PDB. But that was worse: when I dropped a SYSTEM datafile, the whole CDB instance crashed immediately. I opened an SR and Bug 19001390 ‘PDB system tablespace media failure causes the whole CDB to crash’ was created for that. All is documented in that blog post.

Now the bug status is: fixed in 12.1.0.2.1 (Oct 2014) Database Patch Set Update

Good. I’ve installed the latest PSU, which is 12.1.0.2.2 (Jan 2015), and I test the most basic recovery situation: the loss of a non-system tablespace in one PDB.

Here it is:

 

RMAN> report schema;
Report of database schema for database with db_unique_name CDB

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    800      SYSTEM               YES     /u02/oradata/CDB/system01.dbf
3    770      SYSAUX               NO      /u02/oradata/CDB/sysaux01.dbf
4    270      UNDOTBS1             YES     /u02/oradata/CDB/undotbs01.dbf
5    250      PDB$SEED:SYSTEM      NO      /u02/oradata/CDB/pdbseed/system01.dbf
6    5        USERS                NO      /u02/oradata/CDB/users01.dbf
7    490      PDB$SEED:SYSAUX      NO      /u02/oradata/CDB/pdbseed/sysaux01.dbf
11   260      PDB2:SYSTEM          NO      /u02/oradata/CDB/PDB2/system01.dbf
12   520      PDB2:SYSAUX          NO      /u02/oradata/CDB/PDB2/sysaux01.dbf
13   5        PDB2:USERS           NO      /u02/oradata/CDB/PDB2/PDB2_users01.dbf
14   250      PDB1:SYSTEM          NO      /u02/oradata/CDB/PDB1/system01.dbf
15   520      PDB1:SYSAUX          NO      /u02/oradata/CDB/PDB1/sysaux01.dbf
16   5        PDB1:USERS           NO      /u02/oradata/CDB/PDB1/PDB1_users01.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    60       TEMP                 32767       /u02/oradata/CDB/temp01.dbf
2    20       PDB$SEED:TEMP        32767       /u02/oradata/CDB/pdbseed/pdbseed_temp012015-02-06_07-04-28-AM.dbf
3    20       PDB1:TEMP            32767       /u02/oradata/CDB/PDB1/temp012015-02-06_07-04-28-AM.dbf
4    20       PDB2:TEMP            32767       /u02/oradata/CDB/PDB2/temp012015-02-06_07-04-28-AM.dbf

RMAN> host "rm -f /u02/oradata/CDB/PDB1/PDB1_users01.dbf";
host command complete

RMAN> alter system checkpoint;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03004: fatal error during execution of command
ORA-01092: ORACLE instance terminated. Disconnection forced
RMAN-03002: failure of sql statement command at 02/19/2015 22:51:55
ORA-03113: end-of-file on communication channel
Process ID: 19135
Session ID: 357 Serial number: 41977
ORACLE error from target database:
ORA-03114: not connected to ORACLE

 

Ok, but I have the PSU:

 

$ /u01/app/oracle/product/12102EE/OPatch/opatch lspatches
19769480;Database Patch Set Update : 12.1.0.2.2 (19769480)

 

Here is the alert.log:

 

Completed: alter database open
2015-02-19 22:51:46.460000 +01:00
Shared IO Pool defaulting to 20MB. Trying to get it from Buffer Cache for process 19116.
===========================================================
Dumping current patch information
===========================================================
Patch Id: 19769480
Patch Description: Database Patch Set Update : 12.1.0.2.2 (19769480)
Patch Apply Time: 2015-02-19 22:14:05 GMT+01:00
Bugs Fixed: 14643995,16359751,16870214,17835294,18250893,18288842,18354830,
18436647,18456643,18610915,18618122,18674024,18674047,18791688,18845653,
18849537,18885870,18921743,18948177,18952989,18964939,18964978,18967382,
18988834,18990693,19001359,19001390,19016730,19018206,19022470,19024808,
19028800,19044962,19048007,19050649,19052488,19054077,19058490,19065556,
19067244,19068610,19068970,19074147,19075256,19076343,19077215,19124589,
19134173,19143550,19149990,19154375,19155797,19157754,19174430,19174521,
19174942,19176223,19176326,19178851,19180770,19185876,19189317,19189525,
19195895,19197175,19248799,19279273,19280225,19289642,19303936,19304354,
19309466,19329654,19371175,19382851,19390567,19409212,19430401,19434529,
19439759,19440586,19468347,19501299,19518079,19520602,19532017,19561643,
19577410,19597439,19676905,19706965,19708632,19723336,19769480,20074391,
20284155
===========================================================
2015-02-19 22:51:51.113000 +01:00
db_recovery_file_dest_size of 4560 MB is 18.72% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Setting Resource Manager plan SCHEDULER[0x4446]:DEFAULT_MAINTENANCE_PLAN via scheduler window
Setting Resource Manager CDB plan DEFAULT_MAINTENANCE_PLAN via parameter
2015-02-19 22:51:54.892000 +01:00
Errors in file /u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_ckpt_19102.trc:
ORA-63999: data file suffered media failure
ORA-01116: error in opening database file 16
ORA-01110: data file 16: '/u02/oradata/CDB/PDB1/PDB1_users01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Errors in file /u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_ckpt_19102.trc:
ORA-63999: data file suffered media failure
ORA-01116: error in opening database file 16
ORA-01110: data file 16: '/u02/oradata/CDB/PDB1/PDB1_users01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
USER (ospid: 19102): terminating the instance due to error 63999
System state dump requested by (instance=1, osid=19102 (CKPT)), summary=[abnormal instance termination].
System State dumped to trace file /u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_diag_19090_20150219225154.trc
ORA-1092 : opitsk aborting process
2015-02-19 22:52:00.067000 +01:00
Instance terminated by USER, pid = 19102

 

You can see the bug number in the ‘Bugs Fixed’ list, and yet the instance still terminates after a media failure on a PDB datafile. That’s bad news.

 

I’ve lost one datafile, and at the first checkpoint the whole CDB crashed. I’ll have to open an SR again. But for sure, consolidation through the multitenant architecture is not yet ready for sensitive production systems.

 

