
Generate a Table Creation Script using a SQL Query


SQL Query

The query below reads USER_TAB_COLUMNS and builds one CREATE TABLE statement per listed table. It unions three branches: the first emits the opening "create table" line with the first column, the second emits the middle columns, and the third emits the last column with the closing ");". The length parentheses are built inside the CASE so that DATE columns, for which the CASE returns NULL, are emitted without an empty "()".

select txt from (
select 'create table ' || table_name || ' (' || column_name || ' ' || data_type ||
       case data_type
         when 'NUMBER'   then '(' || data_precision || ')'
         when 'VARCHAR2' then '(' || char_length || ')'
         when 'CHAR'     then '(' || char_length || ')'
       end || ',' txt, table_name, column_id
from user_tab_columns
where table_name in ('EMPLOYEE', 'EMPLOYEE_POSITION')
  and column_id = 1
union all
select column_name || ' ' || data_type ||
       case data_type
         when 'NUMBER'   then '(' || data_precision || ')'
         when 'VARCHAR2' then '(' || char_length || ')'
         when 'CHAR'     then '(' || char_length || ')'
       end || ',' txt, table_name, column_id
from user_tab_columns out1
where table_name in ('EMPLOYEE', 'EMPLOYEE_POSITION')
  and column_id > 1
  and column_id < (select max(column_id) from user_tab_columns
                   where table_name = out1.table_name)
union all
select column_name || ' ' || data_type ||
       case data_type
         when 'NUMBER'   then '(' || data_precision || ')'
         when 'VARCHAR2' then '(' || char_length || ')'
         when 'CHAR'     then '(' || char_length || ')'
       end || ');' txt, table_name, column_id
from user_tab_columns out1
where table_name in ('EMPLOYEE', 'EMPLOYEE_POSITION')
  and column_id = (select max(column_id) from user_tab_columns
                   where table_name = out1.table_name)
) order by table_name, column_id;


Result

create table EMPLOYEE (EMPLOYEE_NO NUMBER(10),
EMPLOYEE_FIRST_NAME VARCHAR2(30),
EMPLOYEE_LAST_NAME VARCHAR2(30),
ADDRESS VARCHAR2(100),
UPDATE_DATETIME DATE);
create table EMPLOYEE_POSITION (EMPLOYEE_NO NUMBER(10),
COST_CENTRE VARCHAR2(10),
UNIT VARCHAR2(10),
ROLE VARCHAR2(10),
UPDATE_DATETIME DATE);
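
As a cross-check, Oracle's built-in DBMS_METADATA package can generate the complete DDL for a table directly. Its output includes extra detail such as constraints and storage clauses, so it will not match the hand-built script above character for character, but it is useful for verifying the column definitions:

select dbms_metadata.get_ddl('TABLE', 'EMPLOYEE') from dual;
select dbms_metadata.get_ddl('TABLE', 'EMPLOYEE_POSITION') from dual;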
