
Stages of an Oracle Database



When an Oracle database is started, it passes through two distinct stages (NOMOUNT and MOUNT) before it is finally opened for users and applications to run queries or updates.
Below is a description of each stage and the commands used to bring the database into it:

NOMOUNT
Command: STARTUP NOMOUNT

This is the first step of opening the database.
Oracle searches for a server parameter file named spfile<SID>.ora (binary file) or init<SID>.ora (text-based parameter file).
Oracle reads the spfile or pfile and creates an instance with the name specified in the ORACLE_SID environment variable.

An instance consists of two things:
The System Global Area (SGA), which is the memory allocated to the Oracle server as per the parameters defined in the initialization parameter file.
The background processes (SMON, PMON, DBWn, LGWR, CKPT, etc.), which are mandatory for running an Oracle database.
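The NOMOUNT stage can be reached from a SQL*Plus session connected AS SYSDBA; a minimal sketch (the v$instance query simply confirms the stage, exact output depends on your instance):

```sql
-- Start only the instance (SGA + background processes); no control files are read yet.
STARTUP NOMOUNT;

-- Confirm the stage: STATUS shows STARTED while the instance is in NOMOUNT.
SELECT instance_name, status FROM v$instance;
```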

MOUNT
Command: STARTUP MOUNT, or
ALTER DATABASE MOUNT (if the database is in the NOMOUNT stage).
Oracle opens and reads the control files listed in the CONTROL_FILES parameter of the parameter file.
Oracle raises an error if any of the control files is missing.
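Continuing the same SYSDBA session, a sketch of moving to the MOUNT stage and checking which control files were opened:

```sql
-- Move from NOMOUNT to MOUNT; Oracle now opens the control files.
ALTER DATABASE MOUNT;

-- List the control files taken from the CONTROL_FILES parameter.
SELECT name FROM v$controlfile;

-- STATUS in v$instance shows MOUNTED at this stage.
SELECT status FROM v$instance;
```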

OPEN
Command: STARTUP, or
ALTER DATABASE OPEN (if the database is in the MOUNT stage).
Oracle verifies that all datafiles and redo log files listed in the control file are present and have the necessary read/write permissions, then opens the database.
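The final step from MOUNT to OPEN, with a quick check of the resulting open mode:

```sql
-- Move from MOUNT to OPEN; datafiles and redo log files are checked and opened.
ALTER DATABASE OPEN;

-- OPEN_MODE shows READ WRITE for a normally opened database.
SELECT name, open_mode FROM v$database;
```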

OPEN READ ONLY
Command:
ALTER DATABASE OPEN READ ONLY opens the database for queries only; changes such as updates, inserts, or deletes are not allowed.
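A sketch of opening read only; note the database must first be in the MOUNT stage:

```sql
-- Bring the database to MOUNT, then open it read only.
STARTUP MOUNT;
ALTER DATABASE OPEN READ ONLY;

-- OPEN_MODE now shows READ ONLY; DML statements against it will fail.
SELECT open_mode FROM v$database;
```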

OPEN RESTRICTED
Command: STARTUP RESTRICT
opens the database so that only users with the RESTRICTED SESSION privilege (typically administrators) can connect, e.g. for patching or other database maintenance activities. On an already-open database, ALTER SYSTEM ENABLE RESTRICTED SESSION has the same effect.
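A sketch of starting in restricted mode and later lifting the restriction without a restart:

```sql
-- Start the database so only users with RESTRICTED SESSION privilege can connect.
STARTUP RESTRICT;

-- Check the mode: LOGINS in v$instance shows RESTRICTED.
SELECT logins FROM v$instance;

-- When maintenance is done, allow normal logins again without restarting.
ALTER SYSTEM DISABLE RESTRICTED SESSION;
```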

