Information Catering to Specific Customers

Lifehacker has linked to me again, having discovered The Printable CEO™ Series Page; they had only linked to the original article before. I saw spurts of 870 pages served per hour, which for me is pretty good, and perhaps an indication that my site was choking with the old server config even with caching. In previous hits, I saw a maximum of maybe 200-300/hr. We’ll see how the numbers are tomorrow.

Anyway, it occurred to me that Lifehacker took several months to become aware of the PCEO summary page; they must have just stumbled upon it again recently. No fault of theirs, of course: one problem is that my site is pretty eclectic, with productivity being only one of several topics I write about. I’ve thought that creating a separate blog for productivity might be the way to go, but on the other hand I like writing in the context of what else is going on with my life. It’s an important part of who I am, and I would rather not dilute the continuity of the writing either.

The solution occurred to me in a sudden duhstorm:

Duh! Duh! Duh! The feed-by-category feature has been built into WordPress since 1.5.
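For reference, WordPress serves each category’s feed at a predictable URL; the paths below are illustrative (they assume a site at example.com, a category named “productivity”, and a hypothetical category ID of 3), and the pretty-permalink form requires permalinks to be enabled:

```
# With pretty permalinks enabled:
http://example.com/category/productivity/feed/

# Without pretty permalinks (cat=3 is a hypothetical category ID):
http://example.com/?feed=rss2&cat=3
```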

Idea extension: I could conceivably make a new blog that sucks the RSS into itself directly, which could be a way of packaging other content streams, complete with targeted advertising. I’m sure this is what other people do, but hey, it’s new to me. The question is: am I making my own spamblog by doing this? Is it evil if I’m using my own content, and packaging it in a more convenient form for a target audience, to derive AdSense revenue?
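The ingestion side of that idea is straightforward to sketch. Here’s a minimal, standard-library-only example of pulling the items out of a category feed so a second site could republish them; the feed URL is an illustrative assumption, not an actual setup, and a real republisher would want templating, caching, and deduplication on top of this:

```python
# Minimal sketch: read an RSS 2.0 feed and extract (title, link) pairs,
# the raw material a "repackaging" blog would republish.
import xml.etree.ElementTree as ET
from urllib.request import urlopen

def parse_feed(xml_text):
    """Extract (title, link) pairs from an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):          # every <item> in every <channel>
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        items.append((title, link))
    return items

def fetch_category_feed(url):
    """Fetch and parse a feed URL, e.g. a hypothetical
    http://example.com/category/productivity/feed/"""
    with urlopen(url) as resp:
        return parse_feed(resp.read())
```

A cron job calling `fetch_category_feed` every hour or so, writing the results into the second blog’s database, would be the crude version of the scheme described above.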

I shall go to the 24-hour Walmart to buy some pork rinds and reflect on this matter. Any opinions are appreciated.


  1. pjh 18 years ago

No, it’s not evil, but Google frowns upon duplicate content.  If you go this route, you may want to exclude bots from indexing the duplicate pages.  Go carefully.

  2. Dave Seah 18 years ago

    pjh: That’s interesting! It seems like something non-trivial to detect, given the number of variations in whitespace and assemblage that would occur with even largely-similar text sources. I wish they’d apply this same technology to my SEARCH RESULTS, so I don’t end up getting the same damn “amazon store” results over and over again from a million different sites…

  3. Dave Seah 18 years ago

    I just randomly stumbled upon D. Keith Robinson’s The Great Domain Name Experiment, in which some of the issues that PJH referred to are mentioned. Very interesting.