Grabbing remote content

While I was putting together some code to grab today’s comic from various websites, I experimented with several different functions to find the most efficient solution. In the end I settled on file_get_contents, which reads the entire remote file into a string and returns false if the file doesn’t exist.

Compared to other functions like file, file_get_contents is by far the quickest and the most stable/reliable. Grabbing a complete web page takes approximately one second on average, so with a caching function and only a few fetches per day it shouldn’t affect your server’s performance at all.
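For reference, a minimal file-based cache around file_get_contents might look like the sketch below. The function name cachedFetch, the cache file path, and the one-day TTL are my own illustrative choices, not something from the original setup:

```php
<?php
// Return the page at $url, re-fetching at most once per $ttl seconds.
function cachedFetch($url, $cacheFile, $ttl = 86400)
{
    // Serve from cache while the cached copy is still fresh.
    if (file_exists($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
        return file_get_contents($cacheFile);
    }

    // Otherwise fetch the remote page; @ suppresses the warning on failure.
    $content = @file_get_contents($url);
    if ($content === false) {
        // Fall back to a stale cached copy rather than returning nothing.
        return file_exists($cacheFile) ? file_get_contents($cacheFile) : false;
    }

    // Refresh the cache and return the live content.
    file_put_contents($cacheFile, $content);
    return $content;
}
```

With this in place, only the first call per day actually hits the remote server; every other call is served from the local temporary file.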

In my coding experiment I needed to grab 17 characters from a page. This was achieved using the following function:

function getCode($source, $open, $close) {
    $part1 = explode($open, $source);
    $part2 = explode($close, $part1[1]);
    return $part2[0];
}

Then you call the function and assign the value to a variable like this:

$remoteFile = substr(getCode(@file_get_contents(""), "src=\"/some/html/before", "\" title=\"and/some/after"), 0, 18);
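The same function also works for pulling the text between a pair of HTML tags. Here is a worked example on a local string (the markup and markers below are illustrative, not from a real site), with a small guard added so a missing opening marker returns false instead of raising a notice:

```php
<?php
// Extract the text between $open and $close in $source.
function getCode($source, $open, $close) {
    $part1 = explode($open, $source);
    if (!isset($part1[1])) {
        return false; // opening marker not found
    }
    $part2 = explode($close, $part1[1]);
    return $part2[0];
}

// Illustrative markup standing in for a fetched page.
$html  = '<div class="intro">Just the intro text.</div><p>The rest.</p>';
$intro = getCode($html, '<div class="intro">', '</div>');
echo $intro; // Just the intro text.
```

In practice you would pass the result of file_get_contents (or of a cached fetch) as $source instead of a literal string.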

Feel free to comment or ask for more information about how to set up a caching system and automatically maintain the temporary (cached) file 🙂

One Comment on “Grabbing remote content”

  1. Hello,
    I want to grab intros from an online news magazine… not the whole text, just the intro. This piece of text is contained in a specific HTML tag:

    How can I achieve this with your code?
    Thanks in advance
