
GitLab installation steps on Red Hat Linux

In this blog we will see the steps to install GitLab on Red Hat Enterprise Linux 6. I will be using the virtual machine "gitserver" that I have created on Google Cloud. You can use any server or VM running RHEL 6 and follow these steps.





Follow the steps below to install GitLab. Run these commands as the root user.

# yum install -y curl policycoreutils-python openssh-server cronie
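This installs the dependencies GitLab needs: curl, the SELinux policy tools, the OpenSSH server, and the cron daemon (cronie). If the SSH daemon is not already running on your VM, you can also start and enable it (a minimal extra step, not in the original list; on RHEL 6 the service is named sshd):

# service sshd start
# chkconfig sshd on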



# lokkit -s http -s ssh 
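lokkit is the RHEL 6 command-line firewall tool; the -s flags open the http (port 80) and ssh (port 22) services in iptables. If you want to confirm the rules were added, a quick check like the following should work (an optional verification, not part of the original steps):

# iptables -L -n | grep -E 'dpt:(80|22)'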



# yum install postfix 



# service postfix start 

# chkconfig postfix on 
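Postfix is used by GitLab to send notification emails. To confirm it is running and enabled at boot, you can check (optional verification):

# service postfix status
# chkconfig --list postfix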




# curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.rpm.sh | sudo bash 
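The script above registers the GitLab EE package repository with yum. A quick way to confirm the repository was added (an optional check):

# yum repolist enabled | grep -i gitlab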



# EXTERNAL_URL="http://34.69.44.142" yum -y install gitlab-ee 
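The EXTERNAL_URL variable tells the installer which URL GitLab should be served on; it ends up as the external_url setting in /etc/gitlab/gitlab.rb. If you need to change it later (for example, to use the hostname instead of the IP), you can edit that file and reconfigure, roughly like this (a sketch of the standard Omnibus GitLab procedure):

# vi /etc/gitlab/gitlab.rb        (set: external_url 'http://gitserver.localdomain.com')
# gitlab-ctl reconfigure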



Once the GitLab installation completes successfully, you will see a screen similar to the one below.



You can now access the GitLab console using the HTTP or HTTPS URL that you provided during the installation, i.e., http://<ip_or_server_name>

http://gitserver.localdomain.com

or 

http://34.69.44.142
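If the console does not come up, you can check that all GitLab services are running (a useful troubleshooting step, not part of the original post):

# gitlab-ctl status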


When you open the console for the first time, you will be asked to set the password for the default admin user, root.
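After logging in, you can optionally run GitLab's built-in self-check to verify the installation (an optional sanity check):

# gitlab-rake gitlab:check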


