RESETLOGS always mandatory after executing Recovery using Backup Controlfile

In the case of an incomplete recovery of the database, it is always required to open the database with the RESETLOGS option. But what happens if I run "RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE" on a database that was simply shut down with ABORT?

The test below shows that RESETLOGS is always required after running RECOVER DATABASE ... USING BACKUP CONTROLFILE.

First, a running database is brought down with the ABORT option.

SQL> shutdown abort
ORACLE instance shut down.

[oracle]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Thu Jul 30 15:51:28 2014

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount
ORACLE instance started.

Total System Global Area 1620115456 bytes
Fixed Size                  2226952 bytes
Variable Size            1241515256 bytes
Database Buffers          369098752 bytes
Redo Buffers                7274496 bytes

SQL> alter database mount;

Database altered.

Now I execute the recovery command using the backup controlfile.

SQL> recover database until cancel using backup controlfile;
ORA-00279: change 1926317955 generated at 07/30/2015 15:20:20 needed for thread 1
ORA-00289: suggestion :
/opt/oracle/product/11.2.0/dbhome_1/dbs/arch1_6153_864753447.dbf
ORA-00280: change 1926317955 for thread 1 is in sequence #6153

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
CANCEL
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/opt/oracle/data/test01/system01.dbf'

ORA-01112: media recovery not started

Normally I could simply have opened the database and let SMON perform crash recovery. With that in mind, I cancelled the recovery, since the required redo log has not yet been archived. But when I try to open the database, I get the error below.
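Before cancelling, you can also check how far each datafile is from being consistent. A minimal sketch using the standard V$DATAFILE_HEADER view (output formatting will vary on your system):

```sql
-- Check whether datafiles still need recovery.
-- FUZZY = 'YES' means a file needs more redo applied before it is consistent.
SELECT file#, checkpoint_change#, fuzzy, status
FROM   v$datafile_header;
```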

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open

So RESETLOGS is now mandatory to open the database. What should we do next? I checked the redo log information and found that the log sequence to be applied has still not been archived.

SQL> select sequence# from v$log;

 SEQUENCE#
----------
      6151
      6153
      6152


SQL> select member,group# from v$logfile where group#=(select group# from v$log where sequence#=6153);

MEMBER GROUP#
-------------------------------------------------------------------------------- -------------
/opt/oracle/data/test01/redo03.log  3

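The two queries above can also be combined into one step: the unarchived sequence is normally in the CURRENT online log group, so its member(s) can be fetched directly. A sketch using the standard V$LOG and V$LOGFILE views:

```sql
-- Find the member(s) of the CURRENT online redo log group,
-- i.e. the log holding the redo that has not yet been archived.
SELECT l.sequence#, lf.member
FROM   v$log l
JOIN   v$logfile lf ON lf.group# = l.group#
WHERE  l.status = 'CURRENT';
```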

Since the required redo is still sitting in the online redo log, simply pass that redo log file name at the recovery prompt.

SQL> recover database until cancel using backup controlfile;
ORA-00279: change 1926317955 generated at 07/30/2015 15:20:20 needed for thread 1
ORA-00289: suggestion :
/opt/oracle/product/11.2.0/dbhome_1/dbs/arch1_6153_864753447.dbf
ORA-00280: change 1926317955 for thread 1 is in sequence #6153


Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/opt/oracle/data/dwh01/redo03.log
Log applied.
Media recovery complete.

Now open the database using resetlogs.

SQL> alter database open resetlogs;

Database altered.

Remember, it is always mandatory to open the database with the RESETLOGS option whenever the RECOVER ... USING BACKUP CONTROLFILE command has been used.
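After the RESETLOGS open, the database begins a new incarnation and log sequence numbers restart from 1. You can confirm this with a couple of queries against the standard V$ views (a sketch; the exact values will differ on your system):

```sql
-- Verify the new incarnation created by OPEN RESETLOGS.
SELECT resetlogs_change#, resetlogs_time FROM v$database;

-- Log sequence numbering restarts after RESETLOGS.
SELECT group#, sequence#, status FROM v$log;
```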
