
How to upload a Dataset to Splunk

In this blog I will show how to upload a dataset to Splunk. Once uploaded, the data can be analysed using Splunk's Search Processing Language (SPL) to generate meaningful reports and dashboards.

Steps to Upload a Dataset

1. Log in to the Splunk console.
2. Under Settings, click Add Data.
3. You can upload different types of datasets. In our case we are using a comma-separated values (.csv) file.
4. Click Upload from My Computer.



5. Click on "Select File" to select the csv file to upload.


6. Once selected, the CSV file is uploaded to Splunk. The time this takes depends on the size of the file.


7. Set the source type of the file. Since ours is a CSV file, I will leave it at the default.


8. The next step is to select an index to store your data. An index in Splunk is the repository that holds your data. You can create a new index or keep the default one. It is better to create separate indexes based on the amount and type of data uploaded to your Splunk server.


9. Click Next, review your selections and click Submit.


10. Your CSV data is now uploaded successfully. You can start searching your data and creating meaningful reports and dashboards, as shown in the examples below.
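As a first check that the events actually landed, a search like the one below is enough. This is only a sketch: it assumes the default main index and the csv source type, so substitute whatever index and source type you chose in steps 7 and 8.

index=main sourcetype=csv
| head 10

If events come back, the upload worked and you can move on to building reports.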


Below is a sample search on the uploaded data.
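As an illustration, a report-style search over the uploaded data might look like the following. It again assumes the main index and csv source type, and the column name status is hypothetical; replace these with the index, source type and field names from your own CSV file.

index=main sourcetype=csv
| stats count by status

A grouped result like this is the kind of output you would save as a report or add to a dashboard panel.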





