Feed aggregator

Managing Blockchain tables in Oracle Database 20c

Oracle’s multi-model database 20c is a yearly, short-term support release; users and developers will benefit from the converged database approach of managing all data in one single database...

Categories: DBA Blogs

Using CNCF Sandbox Project Strimzi for Kafka Clusters on VMware Tanzu Kubernetes Grid Integrated Edition (TKGI)

Pas Apicella - Sun, 2020-08-02 22:45
Strimzi, a CNCF sandbox project, provides a way to run an Apache Kafka cluster on Kubernetes in various deployment configurations. In this post we will look at how to get it running on VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) and consume the Kafka cluster from a Spring Boot application.

If you have a K8s cluster, that's all you need to follow along in this example. I am using VMware Tanzu Kubernetes Grid Integrated Edition (TKGI), but you can use any K8s cluster you have, such as GKE, AKS, EKS etc.

Steps

1. Installing Strimzi is straightforward. I am using the namespace "kafka", which needs to be created prior to running this command.
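
The namespace can be created with:

kubectl create namespace kafka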

kubectl apply -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka

2. Verify that the operator was installed correctly and that we have a running pod, as shown below.
  
$ kubectl get pods -n kafka
NAME                                        READY   STATUS    RESTARTS   AGE
strimzi-cluster-operator-6c9d899778-4mdtg   1/1     Running   0          6d22h

3. Next, let's ensure we have a default storage class for the cluster, as shown below.

$ kubectl get storageclass
NAME             PROVISIONER                    AGE
fast (default)   kubernetes.io/vsphere-volume   47d
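
If your cluster has no default storage class, you can mark an existing one as default. A minimal sketch, using the "fast" class shown above (adjust the class name for your cluster):

# marks the "fast" storage class as the cluster default
$ kubectl patch storageclass fast -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'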

4. At this point we are ready to create a Kafka cluster. For this example we will create a 3-node cluster defined in YAML as follows.

kafka-persistent-MULTI_NODE.yaml

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: apples-kafka-cluster
spec:
  kafka:
    version: 2.5.0
    replicas: 3
    listeners:
      external:
        type: loadbalancer
        tls: false
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      log.message.format.version: "2.5"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}

A few things to note:
  • We have enabled external access to the cluster using the listener type "loadbalancer", which means your K8s cluster needs to support services of that type
  • The cluster creates dynamic persistent volume claims, so ensure a default storage class (#3 above) is in place
  • We have disabled TLS given this is a demo
5. Create the Kafka cluster as shown below, ensuring we target the namespace "kafka".

$ kubectl apply -f kafka-persistent-MULTI_NODE.yaml -n kafka
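
Optionally, instead of polling, you can block until the Kafka resource reports Ready (the timeout is just an example value):

# waits until the Kafka custom resource reaches the Ready condition
$ kubectl wait kafka/apples-kafka-cluster --for=condition=Ready --timeout=300s -n kafka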

6. Now we can view the status/creation of our cluster in one of two ways, as shown below. You will need to wait a few minutes for everything to start up.

Option 1:
  
$ kubectl get Kafka -n kafka
NAME                   DESIRED KAFKA REPLICAS   DESIRED ZK REPLICAS
apples-kafka-cluster   3                        3

Option 2:
  
$ kubectl get all -n kafka
NAME                                                        READY   STATUS    RESTARTS   AGE
pod/apples-kafka-cluster-entity-operator-58685b8fbd-r4wxc   3/3     Running   0          6d21h
pod/apples-kafka-cluster-kafka-0                            2/2     Running   0          6d21h
pod/apples-kafka-cluster-kafka-1                            2/2     Running   0          6d21h
pod/apples-kafka-cluster-kafka-2                            2/2     Running   0          6d21h
pod/apples-kafka-cluster-zookeeper-0                        1/1     Running   0          6d21h
pod/apples-kafka-cluster-zookeeper-1                        1/1     Running   0          6d21h
pod/apples-kafka-cluster-zookeeper-2                        1/1     Running   0          6d21h
pod/strimzi-cluster-operator-6c9d899778-4mdtg               1/1     Running   0          6d23h

NAME                                                    TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
service/apples-kafka-cluster-kafka-0                    LoadBalancer   10.100.200.90    10.195.93.200   9094:30362/TCP               6d21h
service/apples-kafka-cluster-kafka-1                    LoadBalancer   10.100.200.179   10.195.93.197   9094:32022/TCP               6d21h
service/apples-kafka-cluster-kafka-2                    LoadBalancer   10.100.200.155   10.195.93.201   9094:32277/TCP               6d21h
service/apples-kafka-cluster-kafka-bootstrap            ClusterIP      10.100.200.77    <none>          9091/TCP,9092/TCP,9093/TCP   6d21h
service/apples-kafka-cluster-kafka-brokers              ClusterIP      None             <none>          9091/TCP,9092/TCP,9093/TCP   6d21h
service/apples-kafka-cluster-kafka-external-bootstrap   LoadBalancer   10.100.200.58    10.195.93.196   9094:30735/TCP               6d21h
service/apples-kafka-cluster-zookeeper-client           ClusterIP      10.100.200.22    <none>          2181/TCP                     6d21h
service/apples-kafka-cluster-zookeeper-nodes            ClusterIP      None             <none>          2181/TCP,2888/TCP,3888/TCP   6d21h

NAME                                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/apples-kafka-cluster-entity-operator   1/1     1            1           6d21h
deployment.apps/strimzi-cluster-operator               1/1     1            1           6d23h

NAME                                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/apples-kafka-cluster-entity-operator-58685b8fbd   1         1         1       6d21h
replicaset.apps/strimzi-cluster-operator-6c9d899778               1         1         1       6d23h

NAME                                              READY   AGE
statefulset.apps/apples-kafka-cluster-kafka       3/3     6d21h
statefulset.apps/apples-kafka-cluster-zookeeper   3/3     6d21h

7. Our entry point into the cluster is a service of type LoadBalancer, which we asked for in our Kafka cluster YAML config. To find its IP address we can run a command as follows, using the cluster name from above.

$ kubectl get service -n kafka apples-kafka-cluster-kafka-external-bootstrap -o=jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'
10.195.93.196

Note: Make a note of this IP address, as we will need it shortly.
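
As an optional sanity check, you can confirm the external listener port is reachable before wiring up any clients (assumes netcat is installed locally):

# optional reachability check of the external bootstrap listener
$ nc -vz 10.195.93.196 9094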

8. Let's create a Kafka topic using YAML as follows. In this YAML we explicitly set the namespace "kafka".

create-kafka-topic.yaml

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: apples-topic
  namespace: kafka
  labels:
    strimzi.io/cluster: apples-kafka-cluster
spec:
  partitions: 1
  replicas: 1
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824


9. Create a Kafka topic as shown below.

$ kubectl apply -f create-kafka-topic.yaml

10. We can view the Kafka topics as shown below.
  
$ kubectl get KafkaTopic -n kafka
NAME           PARTITIONS   REPLICATION FACTOR
apples-topic   1            1
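
If you want to smoke-test the topic without writing any application code, the Kafka console tools can be run in throwaway pods inside the cluster, as Strimzi's quickstart suggests. A sketch; the image tag below is an assumption, so match it to your Strimzi/Kafka versions:

# image tag is an assumption; align it with your Strimzi release
$ kubectl -n kafka run kafka-producer -ti --rm=true --restart=Never --image=strimzi/kafka:0.19.0-kafka-2.5.0 -- bin/kafka-console-producer.sh --broker-list apples-kafka-cluster-kafka-bootstrap:9092 --topic apples-topic

$ kubectl -n kafka run kafka-consumer -ti --rm=true --restart=Never --image=strimzi/kafka:0.19.0-kafka-2.5.0 -- bin/kafka-console-consumer.sh --bootstrap-server apples-kafka-cluster-kafka-bootstrap:9092 --topic apples-topic --from-beginning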

11. At this point we are ready to send some messages to our topic "apples-topic", as well as consume them. To do that we are going to use two Spring Boot applications which exist on GitHub.


Download or clone both onto your file system (the terminal prompts below show them as demo-kafka-producer and demo-kafka-consumer).

12. With both downloaded, you will need to set spring.kafka.bootstrap-servers to the IP address we retrieved in #7 above. That needs to be done in both downloaded/cloned repos. The file to edit in both repos is as follows.

File: src/main/resources/application.yml 

Example:

spring:
  kafka:
    bootstrap-servers: IP-ADDRESS:9094

Note: Make sure you do this in both repos' application.yml files
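
For example, assuming both repos sit side by side in the current directory and contain the IP-ADDRESS placeholder shown above, a hypothetical GNU sed one-liner to update both files:

# hypothetical helper; paths and placeholder are assumptions based on the example above
$ sed -i 's/IP-ADDRESS/10.195.93.196/' demo-kafka-producer/src/main/resources/application.yml demo-kafka-consumer/src/main/resources/application.yml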

13. Now let's run the producer and consumer Spring Boot applications using a command as follows in separate terminal windows. One will use port 8080 while the other uses port 8081.

$ ./mvnw spring-boot:run

Producer:

papicella@papicella:~/pivotal/DemoProjects/spring-starter/pivotal/KAFKA/demo-kafka-producer$ ./mvnw spring-boot:run

...
2020-08-03 11:41:46.742  INFO 34025 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2020-08-03 11:41:46.754  INFO 34025 --- [           main] a.a.t.k.DemoKafkaProducerApplication     : Started DemoKafkaProducerApplication in 1.775 seconds (JVM running for 2.102)

Consumer:

papicella@papicella:~/pivotal/DemoProjects/spring-starter/pivotal/KAFKA/demo-kafka-consumer$ ./mvnw spring-boot:run

...
2020-08-03 11:43:53.423  INFO 34056 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8081 (http) with context path ''
2020-08-03 11:43:53.440  INFO 34056 --- [           main] a.a.t.k.DemoKafkaConsumerApplication     : Started DemoKafkaConsumerApplication in 1.666 seconds (JVM running for 1.936)

14. Start by opening up the Producer UI by navigating to http://localhost:8080/



15. Don't add any messages yet; first, also open up the Consumer UI by navigating to http://localhost:8081/



Note: This application will automatically refresh the page every 2 seconds to show which messages have been sent to the Kafka Topic

16. Return to the Producer UI http://localhost:8080/ and add two messages using whatever text you like as shown below.


17. Return to the Consumer UI http://localhost:8081/ to verify the two messages sent to the Kafka topic have been consumed



18. Both of these Spring Boot applications use "Spring for Apache Kafka" (linked under More Information below).


Both Spring Boot applications use an application.yml file to bootstrap access to the Kafka cluster.

The Producer Spring Boot application uses a KafkaTemplate to send messages to our Kafka topic, as shown below.
  
@Controller
@Slf4j
public class TopicMessageController {

    private KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    public TopicMessageController(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    private final String topicName = "apples-topic";

    @GetMapping("/")
    public String indexPage(Model model) {
        model.addAttribute("topicMessageAddSuccess", "N");
        return "home";
    }

    @PostMapping("/addentry")
    public String addNewTopicMessage(@RequestParam(value = "message") String message, Model model) {
        kafkaTemplate.send(topicName, message);

        log.info("Sent single message: " + message);
        model.addAttribute("message", message);
        model.addAttribute("topicMessageAddSuccess", "Y");

        return "home";
    }
}
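
Given the @PostMapping above, the /addentry endpoint can also be exercised from the command line instead of the UI, for example:

$ curl -X POST -d "message=hello from curl" http://localhost:8080/addentry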

The Consumer Spring Boot application is configured with a KafkaListener, as shown below.
  
@Controller
@Slf4j
public class TopicConsumerController {

    private static ArrayList<String> topicMessages = new ArrayList<String>();

    @GetMapping("/")
    public String indexPage(Model model) {
        model.addAttribute("topicMessages", topicMessages);
        model.addAttribute("topicMessagesCount", topicMessages.size());

        return "home";
    }

    @KafkaListener(topics = "apples-topic")
    public void listen(String message) {
        log.info("Received Message: " + message);
        topicMessages.add(message);
    }
}

In this post we did not set up any client authentication against the cluster for the producer or consumer, given this was just a demo.





More Information

Spring for Apache Kafka

CNCF Sandbox projects

Strimzi
Categories: Fusion Middleware

Last Time You Tested Data Guard Was?

Michael Dinh - Sun, 2020-08-02 19:05

Data Guard is like a spare tire. Don’t wait to find there is no air when you need it.
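
Before (and after) a switchover test, the broker's validate command is a quick readiness check worth running, for example:

DGMGRL> validate database hawk_stby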

[oracle@ol7-112-dg1 ~]$ dgmgrl
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Sun Aug 2 16:57:28 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@hawk
Password:
Connected to "hawk"
Connected as SYSDBA.
DGMGRL> show configuration

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  hawk      - Primary database
    hawk_stby - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 52 seconds ago)

DGMGRL> switchover to hawk_stby
Performing switchover NOW, please wait...
Operation requires a connection to database "hawk_stby"
Connecting ...
Connected to "hawk_stby"
Connected as SYSDBA.
New primary database "hawk_stby" is opening...
Operation requires start up of instance "hawk" on database "hawk"
Starting instance "hawk"...
Connected to an idle instance.
ORACLE instance started.
Connected to "hawk"
Database mounted.
Connected to "hawk"
Switchover succeeded, new primary is "hawk_stby"
DGMGRL> show configuration

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  hawk_stby - Primary database
    hawk      - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 57 seconds ago)

DGMGRL> exit
[oracle@ol7-112-dg1 ~]$

==================================================

[oracle@ol7-112-dg1 upgrade19c]$ sqlplus / as sysdba @ status

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Aug 2 16:59:45 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0


Session altered.

SQL> select NAME,DB_UNIQUE_NAME,OPEN_MODE,DATABASE_ROLE,PROTECTION_MODE,FLASHBACK_ON from v$database
  2  ;

NAME      DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE    PROTECTION_MODE      FLASHBACK_ON
--------- ------------------------------ -------------------- ---------------- -------------------- ------------------
HAWK      hawk                           MOUNTED              PHYSICAL STANDBY MAXIMUM PERFORMANCE  YES

SQL> ;
  1  select NAME,DB_UNIQUE_NAME,OPEN_MODE,DATABASE_ROLE,PROTECTION_MODE,FLASHBACK_ON from v$database
  2*
SQL> select PID,inst_id inst,thread#,client_process,process,status,sequence#,block#,DELAY_MINS
  2  from gv$managed_standby
  3  where BLOCK#>1
  4  and status not in ('CLOSING','IDLE')
  5  order by status desc, thread#, sequence#
  6  ;

PID                           INST   THREAD# CLIENT_P PROCESS   STATUS       SEQUENCE#    BLOCK# DELAY_MINS
------------------------ --------- --------- -------- --------- ------------ --------- --------- ----------
3118                             1         1 LGWR     RFS       RECEIVING          169      3237          0
3151                             1         1 N/A      MRP0      APPLYING_LOG       169      3237          0

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
[oracle@ol7-112-dg1 upgrade19c]$
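
The status script called above is not shown in the post. Judging from the session output, a minimal equivalent might look like the sketch below; the source of the "Session altered." line is not shown, so an NLS date format change is assumed:

-- status.sql (reconstructed sketch, not the author's original)
alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';  -- assumption: produces "Session altered."
select NAME,DB_UNIQUE_NAME,OPEN_MODE,DATABASE_ROLE,PROTECTION_MODE,FLASHBACK_ON from v$database;
select PID,inst_id inst,thread#,client_process,process,status,sequence#,block#,DELAY_MINS
from gv$managed_standby
where BLOCK#>1
and status not in ('CLOSING','IDLE')
order by status desc, thread#, sequence#;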

==================================================

[oracle@ol7-112-dg1 ~]$ dgmgrl
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Sun Aug 2 17:00:03 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@hawk_stby
Password:
Connected to "hawk_stby"
Connected as SYSDBA.
DGMGRL> switchover to hawk
Performing switchover NOW, please wait...
Operation requires a connection to database "hawk"
Connecting ...
Connected to "hawk"
Connected as SYSDBA.
New primary database "hawk" is opening...
Operation requires start up of instance "hawk" on database "hawk_stby"
Starting instance "hawk"...
Connected to an idle instance.
ORACLE instance started.
Connected to "hawk_stby"
Database mounted.
Connected to "hawk_stby"
Switchover succeeded, new primary is "hawk"
DGMGRL> show configuration

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  hawk      - Primary database
    hawk_stby - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 59 seconds ago)

DGMGRL> exit
[oracle@ol7-112-dg1 ~]$

==================================================

[oracle@ol7-112-dg1 upgrade19c]$ sqlplus / as sysdba @ status

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Aug 2 17:02:06 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0


Session altered.

SQL> select NAME,DB_UNIQUE_NAME,OPEN_MODE,DATABASE_ROLE,PROTECTION_MODE,FLASHBACK_ON from v$database
  2  ;

NAME      DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE    PROTECTION_MODE      FLASHBACK_ON
--------- ------------------------------ -------------------- ---------------- -------------------- ------------------
HAWK      hawk                           READ WRITE           PRIMARY          MAXIMUM PERFORMANCE  YES

SQL> ;
  1  select NAME,DB_UNIQUE_NAME,OPEN_MODE,DATABASE_ROLE,PROTECTION_MODE,FLASHBACK_ON from v$database
  2*
SQL> select PID,inst_id inst,thread#,client_process,process,status,sequence#,block#,DELAY_MINS
  2  from gv$managed_standby
  3  where BLOCK#>1
  4  and status not in ('CLOSING','IDLE')
  5  order by status desc, thread#, sequence#
  6  ;

PID                           INST   THREAD# CLIENT_P PROCESS   STATUS       SEQUENCE#    BLOCK# DELAY_MINS
------------------------ --------- --------- -------- --------- ------------ --------- --------- ----------
3328                             1         1 LNS      LNS       WRITING            172      3252          0

SQL> Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
[oracle@ol7-112-dg1 upgrade19c]$

Helm, Tiller, and Helm Charts with Kubernetes

Online Apps DBA - Sun, 2020-08-02 04:34

Containers are the future of deploying applications and, for now, Kubernetes is the one in the spotlight. Kubernetes is an open-source system to deploy, scale, and manage containerized applications anywhere. Helm is a package manager for Kubernetes (like yum and APT). Helm manages Kubernetes resource packages through charts, called Helm Charts. If you are interested […]

The post Helm, Tiller, and Helm Charts with Kubernetes appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[AZ-900] Privacy, Compliance and Data Protection

Online Apps DBA - Sun, 2020-08-02 04:23

Microsoft provides a comprehensive set of documents detailing its commitment to a transparent and regulatory-compliant way of working. Check out the blog post at k21academy.com/az90027 for a better understanding of Microsoft’s Security, Privacy, and Compliance related SLAs and practices. In this blog post, we will cover: o Compliance Terms and Requirements o Microsoft Privacy Statement […]

The post [AZ-900] Privacy, Compliance and Data Protection appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Terraform Installation Overview

Online Apps DBA - Sun, 2020-08-02 03:10

Terraform enables users to define and provision data center infrastructure across different cloud providers using a high-level configuration language known as HashiCorp Configuration Language (HCL), or optionally JSON, but you cannot provision anything without first installing it on your machine. Want to know how to install Terraform on your machine? […]

The post Terraform Installation Overview appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Rolling Upgrades Using Physical Standby and physru_v3.sh

Michael Dinh - Sat, 2020-08-01 11:12

I am not going to reiterate what is already out there; however, the existing write-ups cover 12.1.0.2.0, whereas this one uses 19.3.0.0.0.

Here is the syntax; what seems to be ambiguous is the target version.

$./physru.sh <sysdba user> <primary TNS alias> <physical standby TNS alias> <primary db unique name> <physical standby db unique name> <target version> 

NOTE: These are NOT all the steps for the upgrade, only the steps applicable to running physru.sh.

First physru execution completed successfully.

### First physru execution:
Verifies that Data Guard Broker is disabled and FRA is configured
Creates a guaranteed restore point
Converts the existing physical standby to a transient logical standby

oracle@ol7-112-dg2:hawk:/sf_working/upgrade19c
$ ./physru_v3.sh SYS hawk hawk_stby hawk hawk_stby 19.3.0.0
Please enter the sysdba password:

### Initialize script to either start over or resume execution
Jul 30 20:06:30 2020 [0-1] Identifying rdbms software version
Jul 30 20:06:31 2020 [0-1] database hawk is at version 11.2.0.4.0
Jul 30 20:06:31 2020 [0-1] database hawk_stby is at version 11.2.0.4.0
Jul 30 20:06:31 2020 [0-1] verifying fast recovery area is enabled at hawk and hawk_stby
Jul 30 20:06:31 2020 [0-1] verifying backup location at hawk and hawk_stby
Jul 30 20:06:31 2020 [0-1] verifying available flashback restore points
Jul 30 20:06:31 2020 [0-1] verifying DG Broker is disabled
Jul 30 20:06:31 2020 [0-1] looking up prior execution history
Jul 30 20:06:31 2020 [0-1] purging script execution state from database hawk
Jul 30 20:06:31 2020 [0-1] purging script execution state from database hawk_stby
Jul 30 20:06:31 2020 [0-1] starting new execution of script

### Stage 1: Backup user environment in case rolling upgrade is aborted
Jul 30 20:06:31 2020 [1-1] database hawk location for backup controlfile is /u01/app/oracle/fast_recovery_area
Jul 30 20:06:32 2020 [1-1] database hawk_stby location for backup controlfile is /u01/app/oracle/fast_recovery_area
Jul 30 20:06:32 2020 [1-1] creating restore point PRU_0000_0003 on database hawk_stby
Jul 30 20:06:32 2020 [1-1] backing up current control file on hawk_stby
Jul 30 20:06:32 2020 [1-1] created backup control file /u01/app/oracle/fast_recovery_area/PRU_0003_hawk_stby_f.f
Jul 30 20:06:32 2020 [1-1] creating restore point PRU_0000_0003 on database hawk
Jul 30 20:06:32 2020 [1-1] backing up current control file on hawk
Jul 30 20:06:32 2020 [1-1] created backup control file /u01/app/oracle/fast_recovery_area/PRU_0003_hawk_f.f

NOTE: Restore point PRU_0000_0001 and backup control file PRU_0003_hawk_stby_f.f
      can be used to restore hawk_stby back to its original state as a
      physical standby, in case the rolling upgrade operation needs to be aborted
      prior to the first switchover done in Stage 4.

### Stage 2: Create transient logical standby from existing physical standby
Jul 30 20:06:32 2020 [2-1] verifying RAC is disabled at hawk_stby
Jul 30 20:06:32 2020 [2-1] verifying database roles
Jul 30 20:06:32 2020 [2-1] verifying physical standby is mounted
Jul 30 20:06:32 2020 [2-1] verifying database protection mode
Jul 30 20:06:32 2020 [2-1] verifying transient logical standby datatype support
Jul 30 20:06:33 2020 [2-2] starting media recovery on hawk_stby
Jul 30 20:06:39 2020 [2-2] confirming media recovery is running
Jul 30 20:06:39 2020 [2-2] waiting for apply lag to fall under 30 seconds
Jul 30 20:06:52 2020 [2-2] apply lag measured at 13 seconds
Jul 30 20:06:52 2020 [2-2] stopping media recovery on hawk_stby
Jul 30 20:06:53 2020 [2-2] executing dbms_logstdby.build on database hawk
Jul 30 20:06:59 2020 [2-2] converting physical standby into transient logical standby
Jul 30 20:07:03 2020 [2-3] opening database hawk_stby
Jul 30 20:07:05 2020 [2-4] configuring transient logical standby parameters for rolling upgrade
Jul 30 20:07:05 2020 [2-4] starting logical standby on database hawk_stby
Jul 30 20:07:10 2020 [2-4] enabling log archive destination to database hawk_stby
Jul 30 20:07:11 2020 [2-4] waiting until logminer dictionary has fully loaded
Jul 30 20:07:51 2020 [2-4] dictionary load 03% complete
Jul 30 20:08:01 2020 [2-4] dictionary load 62% complete
Jul 30 20:08:11 2020 [2-4] dictionary load is complete
Jul 30 20:08:11 2020 [2-4] waiting for apply lag to fall under 30 seconds
Jul 30 20:08:15 2020 [2-4] apply lag measured at 3 seconds

NOTE: Database hawk_stby is now ready to be upgraded.  This script has left the
      database open in case you want to perform any further tasks before
      upgrading the database.  Once the upgrade is complete, the database must
      opened in READ WRITE mode before this script can be called to resume the
      rolling upgrade.

NOTE: If hawk_stby was previously a RAC database that was disabled, it may be
      reverted back to a RAC database upon completion of the rdbms upgrade.
      This can be accomplished by performing the following steps:

          1) On instance hawk, set the cluster_database parameter to TRUE.
          eg: SQL> alter system set cluster_database=true scope=spfile;

          2) Shutdown instance hawk.
          eg: SQL> shutdown abort;

          3) Startup and open all instances for database hawk_stby.
          eg: srvctl start database -d hawk_stby

oracle@ol7-112-dg2:hawk:/sf_working/upgrade19c

Second physru execution FAILED.

### Second physru execution to switchover (APPLICATION BROWNOUT):
Executes a switchover making the upgraded standby database the primary database.
Executes a flashback of the original primary database to the guaranteed restore point from step 1 and shuts it down.

[oracle@ol7-112-dg2 upgrade19c]$ ./physru_v3.sh SYS hawk hawk_stby hawk hawk_stby 19.3.0.0.0
Please enter the sysdba password:

### Initialize script to either start over or resume execution
Aug 01 01:55:56 2020 [0-1] Identifying rdbms software version
Aug 01 01:55:56 2020 [0-1] database hawk is at version 11.2.0.4.0
Aug 01 01:55:57 2020 [0-1] database hawk_stby is at version 19.0.0.0.0
Aug 01 01:56:00 2020 [0-1] verifying fast recovery area is enabled at hawk and hawk_stby
Aug 01 01:56:02 2020 [0-1] verifying backup location at hawk and hawk_stby
Aug 01 01:56:03 2020 [0-1] verifying available flashback restore points
Aug 01 01:56:04 2020 [0-1] verifying DG Broker is disabled
Aug 01 01:56:05 2020 [0-1] looking up prior execution history
Aug 01 01:56:08 2020 [0-1] last completed stage [2-4] using script version 0003
Aug 01 01:56:08 2020 [0-1] resuming execution of script

### Stage 3: Validate upgraded transient logical standby
Aug 01 01:56:09 2020 [3-1] database hawk_stby is no longer in OPEN MIGRATE mode
Aug 01 01:56:09 2020 [3-1] ERROR: hawk_stby is not at version 19.3.0.0.0
[oracle@ol7-112-dg2 upgrade19c]$

Second physru execution SUCCEEDED when 19.0.0.0.0 (the base release) was passed as the target version instead of 19.3.0.0.0 (the actual patch release).

[oracle@ol7-112-dg2 upgrade19c]$ ./physru_v3.sh SYS hawk hawk_stby hawk hawk_stby 19.0.0.0.0
Please enter the sysdba password:

### Initialize script to either start over or resume execution
Aug 01 02:48:40 2020 [0-1] Identifying rdbms software version
Aug 01 02:48:40 2020 [0-1] database hawk is at version 11.2.0.4.0
Aug 01 02:48:41 2020 [0-1] database hawk_stby is at version 19.0.0.0.0
Aug 01 02:48:45 2020 [0-1] verifying fast recovery area is enabled at hawk and hawk_stby
Aug 01 02:48:47 2020 [0-1] verifying backup location at hawk and hawk_stby
Aug 01 02:48:48 2020 [0-1] verifying available flashback restore points
Aug 01 02:48:49 2020 [0-1] verifying DG Broker is disabled
Aug 01 02:48:50 2020 [0-1] looking up prior execution history
Aug 01 02:48:53 2020 [0-1] last completed stage [2-4] using script version 0003
Aug 01 02:48:53 2020 [0-1] resuming execution of script

### Stage 3: Validate upgraded transient logical standby
Aug 01 02:48:54 2020 [3-1] database hawk_stby is no longer in OPEN MIGRATE mode
Aug 01 02:48:54 2020 [3-1] database hawk_stby is at version 19.0.0.0.0

### Stage 4: Switch the transient logical standby to be the new primary
Aug 01 02:48:59 2020 [4-1] waiting for hawk_stby to catch up (this could take a while)
Aug 01 02:49:00 2020 [4-1] waiting for apply lag to fall under 30 seconds
Aug 01 02:49:12 2020 [4-1] apply lag measured at 9 seconds
Aug 01 02:49:16 2020 [4-2] using fast switchover optimizations

NOTE: A switchover is about to be performed which will incur a brief outage
      of the primary database.  If you answer 'y' to the question below,
      database hawk_stby will become the new primary database, and database hawk
      will be converted into a standby in preparation for upgrade.  If you answer
      'n' to the question below, the script will exit, leaving the databases in
      their current roles.
Are you ready to proceed with the switchover? (y/n): y

Aug 01 02:49:31 2020 [4-2] continuing
Aug 01 02:49:31 2020 [4-2] switching hawk to become a logical standby
Aug 01 02:49:39 2020 [4-2] hawk is now a logical standby
Aug 01 02:49:39 2020 [4-2] waiting for standby hawk_stby to process end-of-redo from primary
Aug 01 02:49:44 2020 [4-2] switching hawk_stby to become the new primary
Aug 01 02:49:45 2020 [4-2] hawk_stby is now the new primary

### Stage 5: Flashback former primary to pre-upgrade restore point and convert to physical
Aug 01 02:49:49 2020 [5-1] shutting down database hawk
Aug 01 02:50:03 2020 [5-1] mounting database hawk
Aug 01 02:50:12 2020 [5-2] flashing back database hawk to restore point PRU_0000_0003
Aug 01 02:50:15 2020 [5-3] converting hawk into physical standby
Aug 01 02:50:17 2020 [5-4] shutting down database hawk

NOTE: Database hawk has been shutdown, and is now ready to be started
      using the newer version Oracle binary.  This script requires the
      database to be mounted (on all active instances, if RAC) before calling
      this script to resume the rolling upgrade.

[oracle@ol7-112-dg2 upgrade19c]$

Third and Final physru execution completed successfully.

### Execute physru for the third and final time. 
Start redo apply
Prompt whether to switch back to original configuration
Remove guaranteed restore points


[oracle@ol7-112-dg2 upgrade19c]$ ./physru_v3.sh SYS hawk hawk_stby hawk hawk_stby 19.0.0.0.0
Please enter the sysdba password:

### Initialize script to either start over or resume execution
Aug 01 03:26:16 2020 [0-1] Identifying rdbms software version
Aug 01 03:26:17 2020 [0-1] database hawk is at version 19.0.0.0.0
Aug 01 03:26:18 2020 [0-1] database hawk_stby is at version 19.0.0.0.0
Aug 01 03:26:26 2020 [0-1] verifying fast recovery area is enabled at hawk and hawk_stby
Aug 01 03:26:29 2020 [0-1] verifying backup location at hawk and hawk_stby
Aug 01 03:26:31 2020 [0-1] verifying available flashback restore points
Aug 01 03:26:34 2020 [0-1] verifying DG Broker is disabled
Aug 01 03:26:36 2020 [0-1] looking up prior execution history
Aug 01 03:26:39 2020 [0-1] last completed stage [5-4] using script version 0003
Aug 01 03:26:39 2020 [0-1] resuming execution of script

### Stage 6: Run media recovery through upgrade redo
Aug 01 03:26:47 2020 [6-1] upgrade redo region identified as scn range [995261, 2453907]
Aug 01 03:26:47 2020 [6-1] enabling log archive destination to database hawk
Aug 01 03:26:51 2020 [6-1] starting media recovery on hawk
Aug 01 03:26:57 2020 [6-1] confirming media recovery is running
Aug 01 03:26:59 2020 [6-1] waiting for media recovery to initialize v$recovery_progress
Aug 01 03:27:20 2020 [6-1] monitoring media recovery's progress
Aug 01 03:27:32 2020 [6-3] recovery of upgrade redo at 07% - estimated complete at Aug 01 03:31:24
Aug 01 03:27:57 2020 [6-3] recovery of upgrade redo at 26% - estimated complete at Aug 01 03:30:06
Aug 01 03:28:21 2020 [6-3] recovery of upgrade redo at 42% - estimated complete at Aug 01 03:30:00
Aug 01 03:28:45 2020 [6-3] recovery of upgrade redo at 52% - estimated complete at Aug 01 03:30:10
Aug 01 03:29:10 2020 [6-3] recovery of upgrade redo at 61% - estimated complete at Aug 01 03:30:25
Aug 01 03:29:36 2020 [6-3] recovery of upgrade redo at 73% - estimated complete at Aug 01 03:30:27
Aug 01 03:30:00 2020 [6-3] recovery of upgrade redo at 82% - estimated complete at Aug 01 03:30:35
Aug 01 03:30:24 2020 [6-3] recovery of upgrade redo at 90% - estimated complete at Aug 01 03:30:42
Aug 01 03:30:51 2020 [6-3] recovery of upgrade redo at 96% - estimated complete at Aug 01 03:30:55
Aug 01 03:31:12 2020 [6-4] media recovery has finished recovering through upgrade

### Stage 7: Switch back to the original roles prior to the rolling upgrade

NOTE: At this point, you have the option to perform a switchover
     which will restore hawk back to a primary database and
     hawk_stby back to a physical standby database.  If you answer 'n'
     to the question below, hawk will remain a physical standby
     database and hawk_stby will remain a primary database.

Do you want to perform a switchover? (y/n): y

Aug 01 03:31:26 2020 [7-1] continuing
Aug 01 03:31:36 2020 [7-2] waiting for apply lag to fall under 30 seconds
Aug 01 03:31:44 2020 [7-2] apply lag measured at 5 seconds
Aug 01 03:31:48 2020 [7-3] switching hawk_stby to become a physical standby
Aug 01 03:31:55 2020 [7-3] hawk_stby is now a physical standby
Aug 01 03:31:55 2020 [7-3] shutting down database hawk_stby
Aug 01 03:31:57 2020 [7-3] mounting database hawk_stby
Aug 01 03:32:08 2020 [7-3] starting media recovery on hawk_stby
Aug 01 03:32:15 2020 [7-3] confirming media recovery is running
Aug 01 03:32:16 2020 [7-3] waiting for standby hawk to process end-of-redo from primary
Aug 01 03:32:21 2020 [7-3] switching hawk to become the new primary
Aug 01 03:32:23 2020 [7-3] hawk is now the new primary
Aug 01 03:32:23 2020 [7-3] opening database hawk

### Stage 8: Statistics
script start time:                                           31-Jul-20 23:54:44
script finish time:                                          01-Aug-20 03:32:36
total script execution time:                                       +00 03:37:52
wait time for user upgrade:                                        +00 02:52:39
active script execution time:                                      +00 00:45:13
transient logical creation start time:                       31-Jul-20 23:54:46
transient logical creation finish time:                      31-Jul-20 23:55:14
primary to logical switchover start time:                    01-Aug-20 02:49:14
logical to primary switchover finish time:                   01-Aug-20 02:49:47
primary services offline for:                                      +00 00:00:33
total time former primary in physical role:                        +00 00:40:57
time to reach upgrade redo:
time to recover upgrade redo:                                      +00 00:03:44
primary to physical switchover start time:                   01-Aug-20 03:31:25
physical to primary switchover finish time:                  01-Aug-20 03:32:34
primary services offline for:                                      +00 00:01:09

SUCCESS: The physical rolling upgrade is complete

[oracle@ol7-112-dg2 upgrade19c]$

References:

Oracle11g Data Guard: Database Rolling Upgrade Shell Script (Doc ID 949322.1)

Data Guard physru_v3.sh Script Errors ORA-01403 ORA-06512 (Doc ID 2570572.1)

Oracle Database Rolling Upgrades

Agile Methodology and DevOps | DevOps and Agile Relationship

Online Apps DBA - Sat, 2020-08-01 04:13

DevOps has brought a lot of changes in the IT industry. By combining Agile with DevOps, imagine how much faster and more feasible delivery DevOps will provide to a business. Check out this blog at k21academy.com/devops15 to learn more about Agile and how it supports DevOps. This blog post covers: What is Agile Agile Framework […]

The post Agile Methodology and DevOps | DevOps and Agile Relationship appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

I Will Buy You Lunch To Solve dbua -silent [FATAL] [DBT-20061]

Michael Dinh - Fri, 2020-07-31 14:22

Currently upgrading an 11.2 database to 19.3 using dbua in silent mode.

Run preupgrade.jar and no issues detected.

Run preupgrade_fixups.sql

Running dbua -silent resulted in the error [FATAL] [DBT-20061]

Per Doc ID 2246770.1, the suggested action is to ignore the error, which may be hazardous.

Actually, I have already figured out the root cause; however, the method I used may not always be available in all environments.

Will post solution in a week.

HINT: the probable cause is in the output below, which did not occur for the first run.

$ env|grep HOME
OLD_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1
NEW_HOME=/u01/app/oracle/product/19.3.0.0/dbhome_1
HOME=/home/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1

$ $OLD_HOME/jdk/bin/java -jar $NEW_HOME/rdbms/admin/preupgrade.jar TERMINAL TEXT FILE
==================
PREUPGRADE SUMMARY
==================
  /u01/app/oracle/cfgtoollogs/hawk_stby/preupgrade/preupgrade.log
  /u01/app/oracle/cfgtoollogs/hawk_stby/preupgrade/preupgrade_fixups.sql
  /u01/app/oracle/cfgtoollogs/hawk_stby/preupgrade/postupgrade_fixups.sql

Execute fixup scripts as indicated below:

Before upgrade:

Log into the database and execute the preupgrade fixups
@/u01/app/oracle/cfgtoollogs/hawk_stby/preupgrade/preupgrade_fixups.sql

After the upgrade:

Log into the database and execute the postupgrade fixups
@/u01/app/oracle/cfgtoollogs/hawk_stby/preupgrade/postupgrade_fixups.sql

Preupgrade complete: 2020-07-31T18:50:38
oracle@ol7-112-dg2:hawk:/home/oracle
$
	
*******************************************************

$ sqlplus / as sysdba @/u01/app/oracle/cfgtoollogs/hawk_stby/preupgrade/preupgrade_fixups.sql

SQL*Plus: Release 11.2.0.4.0 Production on Fri Jul 31 18:52:21 2020

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Executing Oracle PRE-Upgrade Fixup Script

Auto-Generated by:       Oracle Preupgrade Script
                         Version: 19.0.0.0.0 Build: 7
Generated on:            2020-07-31 18:50:36

For Source Database:     HAWK
Source Database Version: 11.2.0.4.0
For Upgrade to Version:  19.0.0.0.0

Preup                             Preupgrade
Action                            Issue Is
Number  Preupgrade Check Name     Remedied    Further DBA Action
------  ------------------------  ----------  --------------------------------
    1.  min_recovery_area_size    NO          Manual fixup required.
    2.  parameter_min_val         NO          Manual fixup recommended.
    3.  em_present                NO          Manual fixup recommended.
    4.  amd_exists                NO          Manual fixup recommended.
    5.  apex_manual_upgrade       NO          Manual fixup recommended.
    6.  tablespaces_info          NO          Informational only.
                                              Further action is optional.
    7.  exf_rul_exists            NO          Informational only.
                                              Further action is optional.
    8.  rman_recovery_version     NO          Informational only.
                                              Further action is optional.

The fixup scripts have been run and resolved what they can. However,
there are still issues originally identified by the preupgrade that
have not been remedied and are still present in the database.
Depending on the severity of the specific issue, and the nature of
the issue itself, that could mean that your database is not ready
for upgrade.  To resolve the outstanding issues, start by reviewing
the preupgrade_fixups.sql and searching it for the name of
the failed CHECK NAME or Preupgrade Action Number listed above.
There you will find the original corresponding diagnostic message
from the preupgrade which explains in more detail what still needs
to be done.
OL7-112-DG2:(SYS@hawk_stby:LOGICAL STANDBY>
	
*******************************************************
	
$ $NEW_HOME/bin/dbua -silent \
> -sid $ORACLE_SID \
> -oracleHome $ORACLE_HOME \
> -recompile_invalid_objects TRUE \
> -upgradeTimezone TRUE \
> -emConfiguration NONE \
> -upgrade_parallelism 2
Logs directory:  /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-07-31_06-53-41PM
Performing Pre-Upgrade Checks...
============================
PRE- and POST- FIXUP ACTIONS
=============================
/u01/app/oracle/cfgtoollogs/dbua/upgrade2020-07-31_06-53-41PM/hawk/upgrade.xml
/u01/app/oracle/cfgtoollogs/dbua/upgrade2020-07-31_06-53-41PM/hawk/preupgrade_fixups.sql
/u01/app/oracle/cfgtoollogs/dbua/upgrade2020-07-31_06-53-41PM/hawk/postupgrade_fixups.sql
[FATAL] [DBT-20061] One or more of the pre-upgrade checks on the database have resulted into error conditions that must be addressed before proceeding.
   ACTION: Refer to the pre-upgrade results location for details: /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-07-31_06-53-41PM/hawk

*******************************************************

$ ls -l /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-07-31_06-53-41PM/hawk
total 692
-rw-r-----. 1 oracle oinstall      1 Jul 31 18:53 checksBuffer.tmp
-rw-r-----. 1 oracle oinstall  41134 Jul 31 18:53 components.properties
-rw-r-----. 1 oracle oinstall  15085 Jul 31 18:53 dbms_registry_extended.sql
drwxr-x---. 3 oracle oinstall     21 Jul 31 18:53 oracle
-rw-r-----. 1 oracle oinstall  14051 Jul 31 18:53 parameters.properties
-rw-r-----. 1 oracle oinstall   8584 Jul 31 18:53 postupgrade_fixups.sql
-rw-r-----. 1 oracle oinstall   7884 Jul 31 18:53 preupgrade_driver.sql
-rw-r-----. 1 oracle oinstall  13082 Jul 31 18:53 preupgrade_fixups.sql
-rw-r-----. 1 oracle oinstall 101702 Jul 31 18:53 preupgrade_messages.properties
-rw-r-----. 1 oracle oinstall 461660 Jul 31 18:53 preupgrade_package.sql
-rw-r-----. 1 oracle oinstall   5215 Jul 31 18:53 sqls.log
drwxr-x---. 3 oracle oinstall     24 Jul 31 18:53 upgrade
-rw-r-----. 1 oracle oinstall  15082 Jul 31 18:53 upgrade.xml

*******************************************************

DBUA Silent : Director Service Registration Error DBT-20060 DBT-08010 (Doc ID 2246770.1)	

Case 2 :
SEVERE: [FATAL] [DBT-20061] One or more of the pre-upgrade checks on the database have resulted into error conditions that must be addressed before proceeding.

./dbua -silent -sid <SID> -oracleHome $ORACLE_HOME -J-Doracle.install.dbua.validate.all=false

Query Statspack’s “SQL ordered by” sections over a time period

Pythian Group - Fri, 2020-07-31 09:26

In my previous blog post <link>, I presented the statspack_load_trends.sql script, which provides a high-level overview of workload dynamics on DB instance(s) over time. In this post, I’ll present the statspack_top_sqls.sql script, which returns SQL performance statistics by mining the Statspack repository. You can download it <here>.

The script provides the same SQLs with the same performance statistics as Statspack's various "SQL ordered by" sections. However, it reports the figures of all categories in a single line and, more importantly, does so over a time range, considering each available consecutive pair of snapshots. Thus, it provides a way to quickly identify the SQLs contributing the most to a particular "SQL ordered by" category over a time period. Alternatively, we can check the behavior of specific SQL(s) over time.

Its core logic is based on Oracle's $ORACLE_HOME/rdbms/admin/sprepins.sql (StatsPack Report Instance) script, but it doesn't rely on that script's existence to run. As with statspack_load_trends.sql, it queries the Statspack repository directly. It doesn't create or use any (temporary) objects, not even the global temporary table that sprepins.sql uses when producing the report. It can also analyze a repository imported from another DB, and it handles periods spanning instance restart(s).

Important note in case you want to compare the results with Statspack reports

Testing the script by comparing its output to regular Statspack reports (created by running $ORACLE_HOME/rdbms/admin/spreport.sql or sprepins.sql), I noticed that sometimes the numbers in the “SQL ordered by” sections didn’t match between the two. Examples include SQLs reported by my script, but not by Statspack reports. Or even Statspack reports reporting the same SQL (same hash value) multiple times in the same “SQL ordered by” section.
The root cause of those anomalies is described in the MOS note “Statspack Reports Show Different Results In “SQL ordered by …” When Taken At Different Times with Same Snapid (Doc ID 2258762.1)”: “When more than one statspack reports are taken repeatedly in the same session by non-perfstat user, some data might get mixed up in the temporary table used by the reporting script, and the result may get corrupted.”.

The problem was caused by not connecting as the owner of the Statspack repository (usually PERFSTAT) when generating multiple consecutive reports over a snapshot range. The same was true when creating a single Statspack report.
The takeaway is to always connect as the Statspack repository owner when running spreport.sql, especially if you use any helper scripts which generate Statspack reports for a series of snapshots.
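
In other words, something along these lines (connect details are illustrative):

$ sqlplus perfstat@orcl          # connect as the repository owner, not as another DBA user
SQL> @?/rdbms/admin/spreport.sql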

Usage

Starting the script

Let's see the script in action by analyzing a sample Swingbench run on a 2-node RAC database. The output is wide, so I suggest spooling it to a file for easier viewing/plotting:

SQL> spool top_sqls.txt
SQL> @statspack_top_sqls.sql

List SQL by [elapsed_time | cpu_time | buffer_gets | disk_reads | executions | parse_calls | max_sharable_mem | max_version_count | cluster_wait_time]:

Enter a value - default "elapsed_time" :

First, we specify the category by which we want the SQLs ordered. We can choose one of the possibilities listed above, which are the same categories as Statspack's "SQL ordered by" report sections. The script reports the same SQLs in the same order as they appear in the selected Statspack report category.

Suppose we want to order SQLs by "cpu_time", and the corresponding Statspack report lists 10 SQLs in the "SQL ordered by CPU" section. The script lists the same ones. However, the added benefit of the script is that it reports values the Statspack report doesn't display. For example, the "SQL ordered by CPU" section of a Statspack report doesn't display the "Physical Reads" statistic; the "SQL ordered by Elapsed time" section does. If a SQL doesn't qualify for the "SQL ordered by Elapsed time" section, we can't get those values from the Statspack report.

Next, we provide the DBID and instance number we want analyzed. If we don't provide an instance number, the script considers all instances present in the repository:

Instances in this Statspack schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   DB Id    Inst Num DB Name      Instance     Host
----------- -------- ------------ ------------ --------------------------------
 1558102526        1 ORCL         orcl1        ol7-122-rac1.localdomain
 1558102526        2 ORCL         orcl2        ol7-122-rac2.localdomain


Enter DBID to analyze - default "1558102526" :
Enter instance number or "all" to analyze all instancs for DBID = 1558102526 - default "all" :

Finally, we specify the time range we’d like to analyze:

Enter begin time for report [DD-MON-YYYY HH24:MI] - default "30-APR-2020 10:54" : 15-FEB-2020 12:30
Enter end time for report [DD-MON-YYYY HH24:MI] - default "30-APR-2020 22:54" : 15-FEB-2020 13:00
Script output

Let's check what the script output looks like. Because of the output width, I have broken it up into several sections. The whole file containing the spool, and the respective Statspack reports for comparison, can be obtained here.

Since the original spool file contains 280 lines, I'm showing an abbreviated version of the first 8 columns. The full output lists the SQLs between each pair of snapshots. Since we didn't specify an instance number, both instances are considered. Additionally, it's normal and expected that consecutive snapshots don't have consecutive numbers. Snapshots 4 and 21 are two consecutive snapshots on instance 1, as we can confirm by checking their snap times. The other columns are self-explanatory:

INSTANCE_NUMBER  B_SNAP_ID  E_SNAP_ID B_SNAP_TIME        E_SNAP_TIME        INTERVAL_MIN  DBTIMEMIN        AAS
--------------- ---------- ---------- ------------------ ------------------ ------------ ---------- ----------
              1          1          2 15-FEB-20 12:30:00 15-FEB-20 12:40:00           10       2.30       0.23
<removed 16 lines listing SQLs>
                         2          3 15-FEB-20 12:40:00 15-FEB-20 12:50:00           10       1.60       0.16
<removed 25 lines listing SQLs>
                         3          4 15-FEB-20 12:50:00 15-FEB-20 13:00:00           10       1.55       0.15
<etc>
                         4         21 15-FEB-20 13:00:00 15-FEB-20 13:10:00           10       1.66       0.17
                        21         22 15-FEB-20 13:10:00 15-FEB-20 13:20:00           10       1.30       0.13
                        22         23 15-FEB-20 13:20:00 15-FEB-20 13:30:00           10       1.18       0.12
              2         11         12 15-FEB-20 12:30:00 15-FEB-20 12:40:00           10       3.81       0.38
                        12         13 15-FEB-20 12:40:00 15-FEB-20 12:50:00           10       2.70       0.27
                        13         14 15-FEB-20 12:50:00 15-FEB-20 13:00:00           10       2.50       0.25
                        14         15 15-FEB-20 13:00:00 15-FEB-20 13:10:00           10       2.94       0.29
                        15         16 15-FEB-20 13:10:00 15-FEB-20 13:20:00           10       2.18       0.22
                        16         17 15-FEB-20 13:20:00 15-FEB-20 13:30:00           10       1.98       0.20

 

Let's check an excerpt of the output for snapshots 1-2 and 2-3. Apart from the "HV" column (the SQL old hash value), the columns are self-explanatory. For brevity, I'm showing only the first 10 SQLs per snapshot pair.

B_SNAP_ID  E_SNAP_ID         HV ELAPSED_TIME_SEC EXECUTIONS ELAPSED_PER_EXEC_SEC PERCENT_OF_DBTIME_USED CPU_TIME_SEC CPU_TIME_MS_PER_EXEC
--------- ---------- ---------- ---------------- ---------- -------------------- ---------------------- ------------ --------------------
        1          2 3565022785            80,55        483                  ,17                  58,47        14,63                 30,3
                     2319948924             55,8       5931                  ,01                  40,51         7,69                  1,3
                     1852190137            14,22       1024                  ,01                  10,32         7,75                 7,57
                     1113394757             8,17      12332                    0                   5,93         2,97                  ,24
                     4194254847              6,4        483                  ,01                   4,64          ,84                 1,73
                     1283549268             4,55        169                  ,03                    3,3          ,89                 5,28
                     2588369535             4,21         24                  ,18                   3,06         1,12                46,55
                     4212635381             4,18         24                  ,17                   3,04         1,09                45,39
                     4219272024             3,97       1396                    0                   2,88          ,86                  ,62
                     2835506982             3,74        173                  ,02                   2,71          ,57                 3,32
                     (..)                                                                                                                                         
        2          3 3565022785            46,93        956                  ,05                  48,79        18,87                19,73
                     2319948924            22,85      11550                    0                  23,75         7,52                  ,65
                     1852190137            15,35       2158                  ,01                  15,95        11,98                 5,55
                     1283549268             6,36        380                  ,02                   6,61         1,65                 4,33
                     2835506982                6        377                  ,02                   6,24         1,03                 2,72
                     1822227481             5,32       7742                    0                   5,53         1,26                  ,16
                     4194254847             4,69        957                    0                   4,87         1,22                 1,28
                     3463613875             4,61        380                  ,01                   4,79          ,62                 1,62
                     1113394757             4,07      25794                    0                   4,23         3,28                  ,13
                     4219272024             3,89       2945                    0                   4,04         1,43                  ,49
                     (..)
B_SNAP_ID  E_SNAP_ID         HV PHYSICAL_READS PHYSICAL_READS_PER_EXECUTION BUFFER_GETS GETS_PER_EXECUTION ROWS_PROCESSED ROWS_PROCESSED_PER_EXECUTION PARSE_CALLS
--------- ---------- ---------- -------------- ---------------------------- ----------- ------------------ -------------- ---------------------------- -----------
        1          2 3565022785           5860                        12,13      261329             541,05            483                            1         483
                     2319948924           4614                          ,78      205925              34,72          26467                         4,46           1
                     1852190137            394                          ,38      131100             128,03           1024                            1        1025
                     1113394757            336                          ,03      124291              10,08         172648                           14           1
                     4194254847            396                          ,82        7760              16,07            483                            1           1
                     1283549268            262                         1,55        6188              36,62            169                            1         169
                     2588369535             76                         3,17       13104                546             24                            1          24
                     4212635381             76                         3,17       13104                546            737                        30,71           1
                     4219272024            167                          ,12       19979              14,31           1396                            1           1
                     2835506982            255                         1,47        3579              20,69            173                            1         173
                                                                                                                                                                  
        2          3 3565022785           1138                         1,19      511742             535,29            956                            1         957
                     2319948924            487                          ,04      402425              34,84          51879                         4,49           0
                     1852190137            164                          ,08      274493              127,2           2158                            1        2157
                     1283549268            303                           ,8       13726              36,12            380                            1         380
                     2835506982            448                         1,19        7218              19,15            377                            1         377
                     1822227481            259                          ,03       23226                  3           7742                            1           0
                     4194254847            150                          ,16       14371              15,02            957                            1           0
                     3463613875            298                          ,78        5844              15,38            380                            1           0
                     1113394757              0                            0      260084              10,08         361116                           14           0
                     4219272024             76                          ,03       42277              14,36           2945                            1           0
B_SNAP_ID  E_SNAP_ID         HV MAX_SHARABLE_MEM_KB LAST_SHARABLE_MEM_KB MAX_VERSION_COUNT LAST_VERSION_COUNT DELTA_VERSION_COUNT CLUSTER_WAIT_TIME_SEC
--------- ---------- ---------- ------------------- -------------------- ----------------- ------------------ ------------------- ---------------------
        1          2 3565022785               55,39                55,39                 1                  1                   1                 13,67
                     2319948924               47,44                47,44                 1                  1                   1                  9,38
                     1852190137               55,38                55,38                 1                  1                   1                  2,65
                     1113394757               43,41                43,41                 1                  1                   1                  2,17
                     4194254847               47,64                47,64                 1                  1                   1                  1,26
                     1283549268               59,46                59,46                 1                  1                   1                   ,83
                     2588369535               55,38                55,38                 1                  1                   1                  2,21
                     4212635381               35,51                35,51                 1                  1                   1                  2,21
                     4219272024               27,42                27,42                 1                  1                   1                  1,15
                     2835506982               55,38                55,38                 1                  1                   1                   ,39
                                                                                                                                                       
        2          3 3565022785              113,27               113,27                 2                  2                   1                 15,91
                     2319948924               47,44                47,44                 1                  1                   0                     9
                     1852190137              113,25               113,25                 2                  2                   1                  1,32
                     1283549268              121,42               121,42                 2                  2                   1                  1,42
                     2835506982              113,27               113,27                 2                  2                   1                   ,72
                     1822227481                31,5                 31,5                 1                  1                   0                   1,5
                     4194254847               47,64                47,64                 1                  1                   0                  1,76
                     3463613875               59,63                59,63                 1                  1                   0                   ,81
                     1113394757               43,41                43,41                 1                  1                   0                   ,41
                     4219272024               27,42                27,42                 1                  1                   0                  1,77
B_SNAP_ID  E_SNAP_ID         HV CWT_PERCENT_OF_ELAPSED_TIME AVG_HARD_PARSE_TIME_MS MODULE                      SQL_TEXT
--------- ---------- ---------- --------------------------- ---------------------- --------------------------- ----------------------------------------------------------------
        1          2 3565022785                       16,97                        JDBC Thin Client            BEGIN :1 := orderentry.neworder(:2 ,:3 ,:4 ); END;
                     2319948924                        16,8                        New Order                   SELECT PRODUCTS.PRODUCT_ID, PRODUCT_NAME, PRODUCT_DESCRIPTION, C
                     1852190137                       18,61                                                    BEGIN :1 := orderentry.browseproducts(:2 ,:3 ,:4 ); END;
                     1113394757                       26,55                        Browse Products             SELECT PRODUCTS.PRODUCT_ID, PRODUCT_NAME, PRODUCT_DESCRIPTION, C
                     4194254847                       19,72                        New Order                   INSERT INTO ORDERS ( ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_
                     1283549268                       18,16                                                    BEGIN :1 := orderentry.newcustomer(:2 ,:3 ,:4 ,:5 ,:6 ,:7 ,:8 ,:
                     2588369535                       52,52                                                    BEGIN :1 := orderentry.SalesRepsQuery(:2 ,:3 ,:4 ); END;
                     4212635381                        52,9                        Sales Rep Query             SELECT TT.ORDER_TOTAL, TT.SALES_REP_ID, TT.ORDER_DATE, CUSTOMERS
                     4219272024                       28,85                        New Order                   INSERT INTO ORDER_ITEMS ( ORDER_ID, LINE_ITEM_ID, PRODUCT_ID, UN
                     2835506982                       10,53                                                    BEGIN :1 := orderentry.browseandupdateorders(:2 ,:3 ,:4 ); END;
                                                                                                              
        2          3 3565022785                       33,89                        JDBC Thin Client            BEGIN :1 := orderentry.neworder(:2 ,:3 ,:4 ); END;
                     2319948924                       39,39                        New Order                   SELECT PRODUCTS.PRODUCT_ID, PRODUCT_NAME, PRODUCT_DESCRIPTION, C
                     1852190137                        8,63                                                    BEGIN :1 := orderentry.browseproducts(:2 ,:3 ,:4 ); END;
                     1283549268                       22,26                                                    BEGIN :1 := orderentry.newcustomer(:2 ,:3 ,:4 ,:5 ,:6 ,:7 ,:8 ,:
                     2835506982                       12,06                                                    BEGIN :1 := orderentry.browseandupdateorders(:2 ,:3 ,:4 ); END;
                     1822227481                       28,14                        New Order                   SELECT CUSTOMER_ID, CUST_FIRST_NAME, CUST_LAST_NAME, NLS_LANGUAG
                     4194254847                       37,54                        New Order                   INSERT INTO ORDERS ( ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_
                     3463613875                       17,49                        Update Customer Details     INSERT INTO CUSTOMERS ( CUSTOMER_ID , CUST_FIRST_NAME , CUST_LAS
                     1113394757                        9,98                        Browse Products             SELECT PRODUCTS.PRODUCT_ID, PRODUCT_NAME, PRODUCT_DESCRIPTION, C
                     4219272024                       45,53                        New Order                   INSERT INTO ORDER_ITEMS ( ORDER_ID, LINE_ITEM_ID, PRODUCT_ID, UN

 

Final note

Oracle’s sprepins.sql script has a /*+ first_rows */ hint in the inline view that uses analytic functions to compute the current and previous row values from the stats$sql_summary table. The hint is present but disabled in statspack_top_sqls.sql. If for some reason you need to re-enable it, just search for it in the script and enable it (or use its alternative, first_rows(0)).
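
For orientation, here is a minimal sketch of the kind of inline view involved. The LAG over stats$sql_summary is illustrative only; the actual query text in the script is more elaborate and may differ:

SELECT /*+ first_rows(0) */
       snap_id,
       old_hash_value,
       buffer_gets,
       LAG(buffer_gets) OVER (PARTITION BY old_hash_value
                              ORDER BY snap_id) AS prev_buffer_gets
FROM   stats$sql_summary;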

Categories: DBA Blogs

Generating and Storing a Report in the Database

Tom Kyte - Fri, 2020-07-31 08:06
Hi Tom's team, thank you for the quick response to my earlier question (https://asktom.oracle.com/pls/apex/asktom.search?tag=storing-and-accessing-reports-in-the-database-on-oracle-apex-of-version-20). I am opening a new question for my present problem. These are the steps I followed:

a. I created a table from the file report_archive_tbl.sql, either on my database or directly in APEX.

b. I created a process named store_blobquery_report that runs when the Print button is pressed; the Print button redirects to the URL I created for a report query.

declare
  l_report blob;
begin
  l_report := apex_util.get_print_document (
    p_application_id     => :APP_ID,      -- application id
    p_report_query_name  => 'multiquery', -- name of the report query
    p_report_layout_name => 'multiquery', -- name of the report layout
    p_report_layout_type => 'rtf',        -- layout format
    p_document_format    => 'pdf'         -- output document format
  );
  insert into report_archive (
    filename, mimetype, report, created_date, created_by
  ) values (
    'Multi Query Search Results ('||to_char(sysdate,'DDMonYYYY')||')',
    'application/pdf',
    l_report,
    sysdate,
    :USER
  );
end;

c. Each time I click Print, one row should be inserted into report_archive for the report that ran. Instead, when I press the Print button the report is downloaded, but no data appears in the report_archive table. Thank you in advance ^_^.
Categories: DBA Blogs

Save compressed BLOB to OS file

Tom Kyte - Fri, 2020-07-31 08:06
Hello - We're doing the following in the Oracle database: extracting a comma-delimited file from a Siebel database using a SQL query, storing the comma-delimited file in a BLOB, compressing the BLOB data using UTL_COMPRESS.LZ_COMPRESS, and sending the data to a REST API using HTTP POST. The REST service is erroring out because it can't read the Oracle-compressed data. By the way, it's fine when I zip the file manually using WinZip and post it using Postman. The REST API owner would like to know the specification of Oracle's UTL_COMPRESS compression, and is also asking us to send the OS file (the Oracle-compressed file). I need to know the following: 1) What specification is Oracle using for UTL_COMPRESS? 2) How can I create a file from an Oracle-compressed BLOB? Thanks in advance.
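
For reference, a minimal PL/SQL sketch of writing a BLOB to an OS file with UTL_FILE; the directory object DATA_PUMP_DIR, the table my_staging_table and the file name extract.csv.gz are placeholders, not part of the question. (UTL_COMPRESS uses Lempel-Ziv compression, and its LZ_COMPRESS output is generally gzip-compatible, which is likely what the REST API owner needs to know.)

DECLARE
  l_blob   BLOB;                 -- the compressed BLOB, e.g. from UTL_COMPRESS.LZ_COMPRESS
  l_file   UTL_FILE.FILE_TYPE;
  l_buffer RAW(32767);
  l_amount BINARY_INTEGER := 32767;
  l_pos    INTEGER := 1;
  l_len    INTEGER;
BEGIN
  SELECT payload INTO l_blob
  FROM   my_staging_table        -- hypothetical source table
  WHERE  id = 1;
  l_len  := DBMS_LOB.GETLENGTH(l_blob);
  l_file := UTL_FILE.FOPEN('DATA_PUMP_DIR', 'extract.csv.gz', 'wb', 32767);
  WHILE l_pos <= l_len LOOP
    DBMS_LOB.READ(l_blob, l_amount, l_pos, l_buffer);  -- read up to 32K at a time
    UTL_FILE.PUT_RAW(l_file, l_buffer, TRUE);          -- write and flush the chunk
    l_pos := l_pos + l_amount;
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/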
Categories: DBA Blogs

Archive Log Deletion Policy with a Standby Database

Hemant K Chitale - Thu, 2020-07-30 23:15
I have previously blogged about the Archive Log Deletion Policy even when a Standby database is not present.

Here's a 19c example with a Standby Database

I first set it to "APPLIED ON ALL STANDBY" meaning that an ArchiveLog can be deleted only if the Primary has confirmed that the ArchiveLog (i.e. all the Redo in that ArchiveLog) has been applied on every Standby database configured for this Primary.

I then attempt to use the DELETE command to delete all recent ArchiveLogs.


oracle19c>rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Fri Jul 31 11:45:40 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCLCDB (DBID=2778483057)

RMAN> show all;

using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name ORCLCDB are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/opt/oracle/product/19c/dbhome_1/dbs/snapcf_ORCLCDB.f'; # default

RMAN> configure archivelog deletion policy to applied on all standby;

new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
new RMAN configuration parameters are successfully stored

RMAN>
RMAN> sql 'alter system switch logfile';

sql statement: alter system switch logfile

RMAN> sql 'alter system switch logfile';

sql statement: alter system switch logfile

RMAN> list archivelog all completed after "sysdate-1/24";

List of Archived Log Copies for database with db_unique_name ORCLCDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
126 1 41 A 13-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_41_1036108814.dbf

128 1 42 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_42_1036108814.dbf

130 1 43 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_43_1036108814.dbf

131 1 44 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_44_1036108814.dbf


RMAN>
RMAN> delete archivelog all;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=275 device type=DISK
RMAN-08120: warning: archived log not deleted, not yet applied by standby
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_43_1036108814.dbf thread=1 sequence=43
RMAN-08120: warning: archived log not deleted, not yet applied by standby
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_44_1036108814.dbf thread=1 sequence=44
List of Archived Log Copies for database with db_unique_name ORCLCDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
118 1 37 A 12-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_37_1036108814.dbf

119 1 38 A 13-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_38_1036108814.dbf

120 1 39 A 13-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_39_1036108814.dbf

125 1 40 A 13-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_40_1036108814.dbf

126 1 41 A 13-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_41_1036108814.dbf

128 1 42 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_42_1036108814.dbf


Do you really want to delete the above objects (enter YES or NO)? YES
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_37_1036108814.dbf RECID=118 STAMP=1045695910
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_38_1036108814.dbf RECID=119 STAMP=1045696247
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_39_1036108814.dbf RECID=120 STAMP=1045696315
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_40_1036108814.dbf RECID=125 STAMP=1045696378
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_41_1036108814.dbf RECID=126 STAMP=1047209331
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_42_1036108814.dbf RECID=128 STAMP=1047209610
Deleted 6 objects


RMAN>
RMAN> list archivelog all completed after "sysdate-1/24";

List of Archived Log Copies for database with db_unique_name ORCLCDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
130 1 43 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_43_1036108814.dbf

131 1 44 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_44_1036108814.dbf


RMAN>


RMAN refused to delete the recently generated ArchiveLogs 43 and 44 because they have not yet been applied to the Standby database(s).

Once the Standby confirms that it has received and applied all the Redo in 43 and 44:


2020-07-31T11:51:53.314269+08:00
PR00 (PID:3718): Media Recovery Log /opt/oracle/archivelog/STDBYDB/1_43_1036108814.dbf
2020-07-31T11:51:53.676981+08:00
PR00 (PID:3718): Media Recovery Log /opt/oracle/archivelog/STDBYDB/1_44_1036108814.dbf
PR00 (PID:3718): Media Recovery Waiting for T-1.S-45 (in transit)
2020-07-31T11:51:53.868134+08:00


The Primary can now Delete these ArchiveLogs (even though they haven't been backed up).


RMAN> delete archivelog all completed after "sysdate-1/24";

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=275 device type=DISK
List of Archived Log Copies for database with db_unique_name ORCLCDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
130 1 43 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_43_1036108814.dbf

131 1 44 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_44_1036108814.dbf


Do you really want to delete the above objects (enter YES or NO)? YES
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_43_1036108814.dbf RECID=130 STAMP=1047210443
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_44_1036108814.dbf RECID=131 STAMP=1047210452
Deleted 2 objects


RMAN> list archivelog all;

specification does not match any archived log in the repository

RMAN>


If you want to prevent RMAN from Deleting ArchiveLogs that have not been backed up, you can either add the "backed up 'n' times to disk" clause to the DELETE command:


RMAN> sql 'alter system switch logfile';

sql statement: alter system switch logfile

RMAN> sql 'alter system switch logfile';

sql statement: alter system switch logfile

RMAN> list archivelog all;

List of Archived Log Copies for database with db_unique_name ORCLCDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
134 1 45 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_45_1036108814.dbf

136 1 46 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_46_1036108814.dbf


RMAN> delete archivelog all completed after "sysdate-1/24" backed up 1 times to disk;

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=275 device type=DISK
RMAN-08120: warning: archived log not deleted, not yet applied by standby
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_45_1036108814.dbf thread=1 sequence=45
RMAN-08120: warning: archived log not deleted, not yet applied by standby
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_46_1036108814.dbf thread=1 sequence=46

RMAN>


OR use a combined Archive Log Deletion Policy:


RMAN> configure archivelog deletion policy to applied on all standby backed up 1 times to disk;

old RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DISK;
new RMAN configuration parameters are successfully stored

RMAN>
RMAN> sql 'alter system switch logfile';

sql statement: alter system switch logfile

RMAN> sql 'alter system switch logfile';

sql statement: alter system switch logfile

RMAN> delete archivelog all completed after "sysdate-1/24";

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=275 device type=DISK
RMAN-08138: warning: archived log not deleted - must create more backups
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_45_1036108814.dbf thread=1 sequence=45
RMAN-08138: warning: archived log not deleted - must create more backups
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_46_1036108814.dbf thread=1 sequence=46
RMAN-08120: warning: archived log not deleted, not yet applied by standby
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_47_1036108814.dbf thread=1 sequence=47
RMAN-08120: warning: archived log not deleted, not yet applied by standby
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_48_1036108814.dbf thread=1 sequence=48

RMAN>


After I have confirmed that the Standby has applied the recent ArchiveLogs:


PR00 (PID:4569): Media Recovery Log /opt/oracle/archivelog/STDBYDB/1_47_1036108814.dbf
2020-07-31T12:04:40.251269+08:00
PR00 (PID:4569): Media Recovery Log /opt/oracle/archivelog/STDBYDB/1_48_1036108814.dbf
PR00 (PID:4569): Media Recovery Waiting for T-1.S-49 (in transit)
2020-07-31T12:04:40.609327+08:00
Recovery of Online Redo Log: Thread 1 Group 4 Seq 49 Reading mem 0
Mem# 0: /opt/oracle/oradata/STDBYDB/stdbredo01.log


I try the DELETE again on the Primary:


RMAN> delete archivelog all completed after "sysdate-1/24";

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=275 device type=DISK
RMAN-08138: warning: archived log not deleted - must create more backups
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_45_1036108814.dbf thread=1 sequence=45
RMAN-08138: warning: archived log not deleted - must create more backups
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_46_1036108814.dbf thread=1 sequence=46
RMAN-08138: warning: archived log not deleted - must create more backups
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_47_1036108814.dbf thread=1 sequence=47
RMAN-08120: warning: archived log not deleted, not yet applied by standby
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_48_1036108814.dbf thread=1 sequence=48

RMAN>
RMAN> backup as compressed backupset archivelog all;

Starting backup at 31-JUL-20
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=45 RECID=134 STAMP=1047210945
input archived log thread=1 sequence=46 RECID=136 STAMP=1047210952
input archived log thread=1 sequence=47 RECID=138 STAMP=1047211344
input archived log thread=1 sequence=48 RECID=139 STAMP=1047211353
input archived log thread=1 sequence=49 RECID=142 STAMP=1047211650
channel ORA_DISK_1: starting piece 1 at 31-JUL-20
channel ORA_DISK_1: finished piece 1 at 31-JUL-20
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/backupset/2020_07_31/o1_mf_annnn_TAG20200731T120730_hl7682or_.bkp tag=TAG20200731T120730 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 31-JUL-20

Starting Control File and SPFILE Autobackup at 31-JUL-20
piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2020_07_31/o1_mf_s_1047211651_hl768490_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 31-JUL-20

RMAN> delete archivelog all completed after "sysdate-1/24";

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=275 device type=DISK
RMAN-08120: warning: archived log not deleted, not yet applied by standby
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_48_1036108814.dbf thread=1 sequence=48
RMAN-08120: warning: archived log not deleted, not yet applied by standby
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_49_1036108814.dbf thread=1 sequence=49
List of Archived Log Copies for database with db_unique_name ORCLCDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
134 1 45 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_45_1036108814.dbf

136 1 46 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_46_1036108814.dbf

138 1 47 A 31-JUL-20
Name: /opt/oracle/archivelog/ORCLCDB/1_47_1036108814.dbf


Do you really want to delete the above objects (enter YES or NO)? YES
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_45_1036108814.dbf RECID=134 STAMP=1047210945
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_46_1036108814.dbf RECID=136 STAMP=1047210952
deleted archived log
archived log file name=/opt/oracle/archivelog/ORCLCDB/1_47_1036108814.dbf RECID=138 STAMP=1047211344
Deleted 3 objects


RMAN>


At first, I cannot delete any of the ArchiveLogs. Then, after I back up 45 to 49, I am able to delete 45 to 47 but not 48 and 49, as they have not yet been applied to the Standby (although they have been backed up locally on the Primary server).

Thus the Archive Log Deletion Policy protects against accidental deletion via the DELETE ARCHIVELOG command.

HOWEVER, the DELETE OBSOLETE command ignores this Policy.


Notes:
1.  Archive Log Deletion Policy has been available since 10g.  However, if you are on 10g or 11g, see Oracle Support Document ID 728053.1
2.  Instead of "APPLIED ON [ALL] STANDBY" you can also use the "SHIPPED TO [ALL] STANDBY" subclause.  This is useful if you have Standby databases that are [deliberately] lagging the Primary -- e.g. a Standby that does the Apply 4 hours after the Redo is generated on the Primary.
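
For example, a combined policy using the SHIPPED TO variant would be (a sketch, following the same syntax as the commands above):

RMAN> configure archivelog deletion policy to shipped to all standby backed up 1 times to disk;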
Categories: DBA Blogs

DataPump crashed, now cant create new jobs

Tom Kyte - Thu, 2020-07-30 13:46
Hi Tom, a scheduled task that worked for months suddenly crashed. It's an EXPDP job transferring data to an external SSD (USB 3.0). The disk still runs. The errors listed after a new attempt to export (EXPDP SYSTEM/XXXX@....) are like: ORA-31626: job does not exist; ORA-31638: cannot attach to job SYS_EXPORT_SCHEMA_01 for user SYSTEM; ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95 ...... I read some hints that resulted in deleting all Data Pump jobs (which was too much, I think).
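
For diagnosis, a common first step (a sketch using standard dictionary views) is to check which Data Pump jobs the database still knows about, and whether any orphaned master tables remain:

-- Jobs Data Pump believes exist (state, attached sessions)
SELECT owner_name, job_name, state, attached_sessions
FROM   dba_datapump_jobs;

-- Leftover master tables from crashed jobs
SELECT owner, object_name
FROM   dba_objects
WHERE  object_name LIKE 'SYS_EXPORT_SCHEMA%'
AND    object_type = 'TABLE';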
Categories: DBA Blogs

encrypts file using PGP public

Tom Kyte - Thu, 2020-07-30 13:46
Hi, I don't know if this is the right place, but I need to complete this task using PL/SQL. I need to write PL/SQL code to create a file and encrypt the file using a PGP public key. I don't know how to achieve the second part of this task (file encryption): can I use the DBMS_CRYPTO package for file encryption and decryption, or do I need some other route? Our client has already provided the PGP public key, so we don't need to create a PGP key; we need to use that key to encrypt/decrypt the file. I hope my requirement is clear enough. Thanks, Zahid
Categories: DBA Blogs

Determine The Column Font

Tom Kyte - Thu, 2020-07-30 13:46
Greetings, please tell me how to change the column font type and size in an APEX classic report. In the legacy Reports Builder 6 it is easy: just select the column and choose the font and size you want. How can this be accomplished in APEX? Looking forward to hearing from you.
Categories: DBA Blogs

Slow Delete with Cascade Delete

Tom Kyte - Thu, 2020-07-30 13:46
I'm troubleshooting a very slow-running delete statement in a stored procedure. The delete statement attempts to delete one record from one table. The record to be deleted contains a key field that is pervasive throughout the database; call it Order_Header. Sixteen tables in the database have foreign keys with cascade delete constraints referencing Order_Header (child tables). The database also contains 11 tables with foreign keys with cascade delete constraints referencing the child tables (grandchildren). I had a similar situation with another large cascade delete that was resolved by adding indexes to the foreign keys. The only potential issue that I see in this case is that two of the grandchildren are also children of Order_Header. Stated a little differently, two of the children of Order_Header are also grandchildren of other children of Order_Header. Could this be the cause of the slow-performing delete?
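
As a starting point, here is a sketch of a query to spot foreign keys whose leading column has no matching index -- the usual cause of slow cascade deletes. It only checks the first column of each key; multi-column foreign keys would need a fuller comparison:

-- Foreign keys whose leading column is not the leading column of any index
SELECT c.table_name, c.constraint_name, cc.column_name
FROM   user_constraints  c
JOIN   user_cons_columns cc
       ON  cc.constraint_name = c.constraint_name
       AND cc.position = 1
WHERE  c.constraint_type = 'R'
AND    NOT EXISTS (
         SELECT 1
         FROM   user_ind_columns ic
         WHERE  ic.table_name      = c.table_name
         AND    ic.column_name     = cc.column_name
         AND    ic.column_position = 1
       );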
Categories: DBA Blogs

Need to to create index on a column for partial Data

Tom Kyte - Thu, 2020-07-30 13:46
Hi Tom, below is my table:

create table invoice_fact(
  invoice_id      number(7),
  inv_chk_id_date varchar2(1000),
  accounting_date date,
  inv_amount      number(12,4)
);

and below is the data:

INVOICE_ID ACCOUNTING_DATE INV_AMOUNT INV_CHK_ID_DATE
0000001    12-Mar-16       10000.77   0000001,19-Mar-2016,10000.77
0000002    13-Mar-16       10070.74   0000002,21-Mar-2016,10070.74
0000003    14-Mar-16       10124.7    0000003,16-Mar-2016,10124.7
0000004    15-Mar-16       10136.56   0000004,17-Mar-2016,10136.56
0000005    16-Mar-16       10190.35   0000005,23-Mar-2016,10190.35
0000006    17-Mar-16       10200.94   0000006,20-Mar-2016,10200.94
0000007    18-Mar-16       10255.44   0000007,20-Mar-2016,10255.44
0000008    19-Mar-16       10341.86   0000008,20-Mar-2016,10341.86

and my query is:

select *
from invoice_fact
where to_date(substr(inv_chk_id_date,9,instr(inv_chk_id_date,',',1,2)-9),'DD-MON-RRRR')
      between '01-MAR-2016' and '01-MAR-2017';

I have a date embedded in the INV_CHK_ID_DATE column; I have to extract that date and apply a filter on it in the WHERE clause. The table has more than 20 million rows, so with a filter like the above the query takes a very long time to execute. Is there any way to speed up this query?
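
One option worth testing (a sketch, not a definitive fix) is a function-based index whose expression matches the predicate exactly. The third TO_DATE argument pins NLS_DATE_LANGUAGE so the expression is deterministic; the query must then use the same three-argument expression, and should compare against real DATE values (e.g. DATE '2016-03-01') rather than strings. This assumes the embedded date parses cleanly in every row:

CREATE INDEX invoice_fact_chk_date_fbi ON invoice_fact (
  TO_DATE(SUBSTR(inv_chk_id_date, 9,
                 INSTR(inv_chk_id_date, ',', 1, 2) - 9),
          'DD-MON-RRRR', 'NLS_DATE_LANGUAGE=AMERICAN')
);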
Categories: DBA Blogs

Query performance difference in RAC instances

Tom Kyte - Thu, 2020-07-30 13:46
Hi, the Oracle DB version I am working on is Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production. It's a RAC with 2 instances. This is about a query which runs quickly (under 15s) on instance #1 but takes 10-11 minutes when it runs on instance #2. The query is utilizing indexes as expected. The user provided me the query and I got the sql_ids of its historical runs via DBA_HIST_SQLTEXT. That gave me 5 sql_ids for the same query run between 20-Jul and 28-Jul:

select sql_id from dba_hist_sqltext where sql_text like '%CLAIMTYP%claimtype%';

Observation 1: Via the following query, I discovered that the slower runs are on instance #2 and the faster ones on #1:

select distinct h.instance_number, trunc(h.sample_time) run_date, h.sql_exec_start,
       h.sql_id, h.session_id, h.sql_child_number, h.sql_full_plan_hash_value,
       to_char(trunc(elapsed_time_total/1000000/60/60/24),'9') || ':' ||
       to_char(trunc(mod(elapsed_time_total/1000000/60/60, 24)),'09') || ':' ||
       to_char(trunc(mod(elapsed_time_total/1000000,3600)/60),'09') || ':' ||
       to_char(mod(mod(elapsed_time_total/1000000, 3600),60),'09') as "Time Taken [dd:hh:mi:ss]"
from dba_hist_active_sess_history h, dba_hist_sqlstat s
where s.sql_id = h.sql_id
and h.instance_number = s.instance_number
and h.sql_id in ('73b4smcjjk38s','bt8sw7vj42sp3',........);

Observation 2: In DBA_HIST_ACTIVE_SESS_HISTORY, under the column DELTA_INTERCONNECT_IO_BYTES, I found several non-null values for the instance #2 runs, which per the Oracle documentation means "Number of I/O bytes sent over the I/O interconnect over the last DELTA_TIME microseconds". The value of that column is NULL in all cases where the same query ran on instance #1. And there are many rows in ASH for the slower-run sql_ids, but the faster runs have only a couple of rows (each row indicating a 10s interval).

Do the above observations indicate some issue with the server of instance #2? Thanks.
Categories: DBA Blogs

Extract xml node value with existnode filter condition

Tom Kyte - Thu, 2020-07-30 13:46
Hi Chris, I appreciate your support. I have a requirement to return specific node values only if a particular node exists. For example, for the XML below, my output should be 2 records, because the Jobsdata/jobList/jobData/job/document node exists 2 times, and the returned nodes are job/document/fileID and job/directoryid. O/p --> 100,D100 and 200,D200. Please find the XML table in the Live SQL link. My output should be as given below:

id  fileID  directoryid
1   100     D100
1   200     D200

Filter criteria: I don't want to display the directoryid D300, because there is no <document> tag for it. fileID and directoryid should be displayed only where a <document> tag is available. Thanks, Asit
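
A sketch of one way to express this with XMLTABLE, filtering on the existence of the <document> node in the XPath; the table and column names (xml_tab, xml_col, id) are placeholders for the Live SQL table, and the element names are taken from the question:

SELECT t.id, x.fileid, x.directoryid
FROM   xml_tab t,
       XMLTABLE('/Jobsdata/jobList/jobData/job[document]'
                PASSING t.xml_col
                COLUMNS fileid      VARCHAR2(20) PATH 'document/fileID',
                        directoryid VARCHAR2(20) PATH 'directoryid') x;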
Categories: DBA Blogs
