

How to get the first line of a file in a bash script?


Question

I have to put the first line of a file into a bash variable. I guess it can be done with the grep command, but is there any way to restrict the number of lines?


Accepted Answer

head takes the first lines from a file, and the -n parameter can be used to specify how many lines should be extracted:

line=$(head -n 1 filename)
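
A quick way to verify this, using a throwaway demo file (the name demo.txt is just for illustration):

# Create a three-line demo file, capture its first line, and print it.
# Quoting "$line" preserves any whitespace in the captured line.
printf 'first\nsecond\nthird\n' > demo.txt
line=$(head -n 1 demo.txt)
echo "$line"    # prints: first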

To read the first line using bash alone, use the read builtin, e.g.

read -r firstline < file

firstline will then hold the first line (no need to assign it to another variable).
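
A minimal check, feeding read from a here-string instead of a file (bash-specific syntax):

read -r firstline <<< $'alpha\nbeta\ngamma'
echo "$firstline"    # prints: alpha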


This suffices and stores the first line of filename in the variable $line:

read -r line < filename

I also like awk for this:

awk 'NR==1 {print; exit}' file

To store the line itself, use the var=$(command) syntax. In this case, line=$(awk 'NR==1 {print; exit}' file).

Or even sed:

sed -n '1p' file

With the equivalent line=$(sed -n '1p' file).


Here is a sample where read and awk are fed the output of seq 10, that is, the numbers from 1 to 10:

$ read -r line < <(seq 10) 
$ echo "$line"
1

$ line=$(awk 'NR==1 {print; exit}' <(seq 10))
$ echo "$line"
1
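
The same check works for the sed and head variants shown above:

$ line=$(sed -n '1p' <(seq 10))
$ echo "$line"
1

$ line=$(head -n 1 <(seq 10))
$ echo "$line"
1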

line=$(head -1 file)

will work fine (as the previous answer shows). But

line=$(read -r FIRSTLINE < filename)

will not: read runs in a subshell there and writes nothing to stdout, so line ends up empty. To get the speed advantage of the builtin, read directly into the variable:

read -r line < filename

This is marginally faster, since read is a bash builtin and no external process is started.
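
A rough way to see the difference (the loop count and the file name filename are arbitrary placeholders; absolute timings depend on your system):

# head forks an external process on every iteration; read does not.
time for i in {1..1000}; do line=$(head -n 1 filename); done
time for i in {1..1000}; do read -r line < filename; done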


Just echo the first line of your source file into your target file:

echo "$(head -n 1 source.txt)" > target.txt
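
A simpler variant with the same effect for ordinary text files is to redirect head directly (same source.txt/target.txt names as above):

head -n 1 source.txt > target.txt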

The question didn't ask which is fastest, but to add to the sed answer: -n '1p' performs badly on large files, because sed still reads the whole file even though it only prints the first line. Out of curiosity, I found that head wins over sed '1q' narrowly:

# best:
head -n1 $bigfile >/dev/null

# a bit slower than head (I saw about 10% difference):
sed '1q' $bigfile >/dev/null

# VERY slow:
sed -n '1p' $bigfile >/dev/null
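
One way to reproduce the comparison yourself ($bigfile is a placeholder; substitute any large file you have, and expect the numbers to vary by system):

bigfile=/var/log/syslog    # placeholder; use any large file

time head -n1     "$bigfile" >/dev/null
time sed '1q'     "$bigfile" >/dev/null
time sed -n '1p'  "$bigfile" >/dev/null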

Source: https://stackoverflow.com/questions/2439579
Licensed under: CC-BY-SA with attribution
Not affiliated with: Stack Overflow