I had a large text file (a MySQL database dump) with many instances of a domain name that I needed to swap out for a website being deployed under a new domain name. My first attempts at a search & replace in a simple text editor failed and eventually crashed the editor, and it wasn't something I could repeat exactly the next time I had to migrate the same database. The Linux utility 'sed' is a stream editor that handles large files easily and seemed like a simple solution. I ran it against the file twice, being careful to replace the domain name only where it appeared in URLs, and not where it appeared in any stored email addresses.
- www.my-old-domain.com —> www.objectclarity.com/~newhome
- ://my-old-domain.com —> ://objectclarity.com/~newhome
[code]sed -i.bak -e 's%www\.my-old-domain\.com%www.objectclarity.com/~newhome%g' huge-mysql-dump.sql
sed -i -e 's%://my-old-domain\.com%://objectclarity.com/~newhome%g' huge-mysql-dump.sql[/code]
Here’s a breakdown of what the commands mean:
(-i.bak) edits the file in place, first creating a backup of the original with a .bak extension.
(-e) tells sed that an editing script follows.
The part in single quotes is my search-and-replace expression in the following form:
(s) defines a search & replace expression
(%) is my delimiter. "/" is the default delimiter, but since I'm working with URLs, using "%" means I don't have to escape every slash, which makes for a simpler expression.
(g) tells sed to replace *all* instances of my regex with the replacement string.
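The delimiter point is easiest to see side by side. Here's a quick illustration (the sample text is made up); both commands produce the same output, but the second is much easier to read:

```shell
# Default "/" delimiter: every slash in the pattern and replacement must be escaped.
echo 'x ://my-old-domain.com/ y' | sed 's/:\/\/my-old-domain\.com/:\/\/objectclarity.com\/~newhome/'

# "%" delimiter: slashes pass through untouched.
echo 'x ://my-old-domain.com/ y' | sed 's%://my-old-domain\.com%://objectclarity.com/~newhome%'
```

Any character that doesn't appear in the pattern or replacement works as a delimiter; "%" is just one convenient choice for URL-heavy text.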
Each command ran in about 1 second on a 40 MB file and did the job perfectly.
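Since repeatability was the original motivation, both substitutions can also be collected into a single sed invocation with two -e expressions, run in one pass. A sketch, using a small made-up sample file in place of the real dump (the file name and contents here are illustrative):

```shell
#!/bin/sh
# Illustrative sample standing in for the real dump file.
printf '%s\n' \
  'a http://www.my-old-domain.com/page b' \
  'mail user@my-old-domain.com' \
  'c https://my-old-domain.com/x d' > sample-dump.sql

# Both replacements in one pass; -i.bak keeps a backup of the untouched original.
sed -i.bak \
  -e 's%www\.my-old-domain\.com%www.objectclarity.com/~newhome%g' \
  -e 's%://my-old-domain\.com%://objectclarity.com/~newhome%g' \
  sample-dump.sql

cat sample-dump.sql
```

Note that the email address survives unchanged, since neither pattern matches a bare `my-old-domain.com` without a leading `www.` or `://`.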