Why small B2B companies should use PayPerClick

by Chris Bose on November 2, 2011

Many small B2B companies are wary of running PayPerClick campaigns because they do not want to waste their resources on a complex, difficult-to-understand process. They fear that the system is set up to make Google money, and that they cannot afford the time to learn how to change the default settings.

It is a difficult system to master, but it's worth the effort, because a well-run PayPerClick campaign delivers:

  • Valuable enquiries
  • Lots of business insight
  • Best-converting niche keywords

You get valuable enquiries when you start thinking about how your potential customers describe their problems online; when your ads reflect how those customers speak; and when your landing pages address their problems as specifically as possible.

You get lots of business insight because your tracking systems help you identify which keywords, ads and landing pages give the best conversions to sales.

You can use your newly discovered niche keywords to create new content for your existing site. The pages will then be better optimised for natural searches and your organic traffic will become more relevant.

In summary, your PayPerClick campaigns will begin to work for your business if you:

  • Understand your potential buyer
  • Target the keywords they are using
  • Track and measure everything
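The "track and measure everything" step can start as simply as ranking keywords by conversion rate. A minimal sketch in Python, working from a made-up CSV export (the keywords and figures are invented for illustration):

```python
import csv
from io import StringIO

# Hypothetical export: one row per keyword with click and conversion counts.
DATA = """keyword,clicks,conversions
precision gear cutting,120,9
industrial gearboxes,340,7
bevel gear manufacturer,65,8
"""

def conversion_rates(csv_text):
    """Return keyword rows sorted by conversion rate, best first."""
    rows = list(csv.DictReader(StringIO(csv_text)))
    for r in rows:
        r["rate"] = int(r["conversions"]) / int(r["clicks"])
    return sorted(rows, key=lambda r: r["rate"], reverse=True)

for r in conversion_rates(DATA):
    print(f'{r["keyword"]}: {r["rate"]:.1%}')
```

Note how the low-volume keyword can turn out to be the best converter — exactly the kind of niche keyword worth feeding back into your site content.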

I specialise in getting quality enquiries for B2B clients from PayPerClick campaigns. Call me, Chris Bose, on 01488 674 203 for advice on how to get started.

Similarity Sets and Server Loads

by Chris Bose on July 15, 2011

At the start of this work I did not delve too deeply into the quality of the similarity sets, hoping that the script would sort out any inadequacies.

Quality in gets quality out. Obviously, check each URL to see whether it is relevant to your theme. Then check the log to confirm that the script is actually downloading the pages on the site, because sometimes it isn't; server issues or robots.txt restrictions can be the cause.
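The robots.txt check can be automated with Python's standard library; a small sketch (the site, rules and user-agent shown are placeholders, not the crawler's real configuration):

```python
from urllib.robotparser import RobotFileParser

def can_crawl(robots_txt, url, agent="MyCrawler"):
    """Given the text of a site's robots.txt, check whether
    our crawler is allowed to fetch a particular URL."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url)

# Example rules: everything allowed except /private/.
ROBOTS = """User-agent: *
Disallow: /private/
"""

print(can_crawl(ROBOTS, "http://example.com/products.html"))   # True
print(can_crawl(ROBOTS, "http://example.com/private/x.html"))  # False
```

If a whole site comes back blocked, there is no point blaming the script — the server owner has opted out.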

You can increase the quality of the runs by downloading entire sites with Web Dumper and serving the pages from your own server. This way you don't piss off server owners by eating their bandwidth every time you do a run, and you can tune your own server to make sure the pages are delivered within the required time. Watch the quality of your results go up!
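One way to point a run at the local copies is to rewrite each URL in the similarity set. A sketch, assuming each mirrored site sits in a directory named after its hostname and is served from localhost (the port and directory layout are assumptions for illustration):

```python
from urllib.parse import urlparse

def to_local(url, local_base="http://localhost:8000"):
    """Rewrite a remote URL to its locally mirrored copy,
    assuming each site is mirrored under <hostname>/ on our server."""
    p = urlparse(url)
    return f"{local_base}/{p.netloc}{p.path or '/'}"

print(to_local("http://www.example.com/products/index.html"))
# http://localhost:8000/www.example.com/products/index.html
```

The crawler then sees the same pages, but every fetch hits your own hardware.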

Because the files are being accessed “locally”, I can now run as many simultaneous runs against a particular similarity set as my bandwidth will cope with.
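Capping the number of simultaneous fetches is straightforward with a thread pool; a sketch (the fetch function and worker count are placeholders to be tuned against your own bandwidth):

```python
from concurrent.futures import ThreadPoolExecutor

def run_similarity_set(urls, fetch, max_workers=8):
    """Fetch all pages for one similarity set, with at most
    max_workers simultaneous requests in flight at any moment."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, urls))
```

Results come back in input order, so each page can be matched to its URL afterwards; raising `max_workers` increases concurrency until the local server or the pipe saturates.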

Old Applications, New PHP Version

July 12, 2011

WordPress 3.2 requires PHP version 5.2.4 or later. My hosting server recently upgraded to 5.3.6, which generated a lot of error messages from WebCalendar. To turn them off, edit the php.ini file found in /etc/php5/apache and set display_errors and log_errors to Off. Restart the Apache server by SSHing in to your server and running the following […]
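For reference, the two directives described above look like this in php.ini (note that the directive is spelled log_errors):

```ini
; php.ini — silence the messages WebCalendar triggers under PHP 5.3
display_errors = Off
log_errors = Off
```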

Read the full article →

WordPress, Thesis and 1and1

July 1, 2011

A new WordPress install worked fine on a 1and1 account, but when I activated Thesis I got a Server 500 error on all the admin pages, even though I could still see the website. A Google search turned up a page on the WordPress forums which said to add a php.ini file to the wp-admin directory […]

Read the full article →

Advanced Manufacturing is not a niche

June 24, 2011

To produce useful results, a focused web crawler needs a good starting URL and a relevant similarity set. I began by thinking that advanced manufacturing was a good niche, but the crawler is telling me that it is no niche at all. The results have been highly variable and of poor quality, and my interpretation of […]

Read the full article →

Splitting Large Text Files

June 15, 2011

Our focused web crawler produces raw data text files of up to 150 GB. These have to be split to work in a text editor like BBEdit, whose upper limit is 300 MB. You can use a utility like Split and Concat, but it is manual-only and has no batch facility. I wanted to […]
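A batch-friendly splitter is only a few lines of Python. A sketch that cuts at line boundaries so no record is torn in half, keeping each chunk under a size limit (250 MB by default, comfortably below BBEdit's stated 300 MB ceiling):

```python
import os

def split_file(path, max_bytes=250 * 1024**2, out_dir="."):
    """Split a large text file into numbered chunks no larger than
    max_bytes, breaking only at line boundaries.
    Returns the number of chunks written."""
    base = os.path.basename(path)
    part, size, n = None, 0, 0
    with open(path, "rb") as src:
        for line in src:
            # Start a new chunk when the next line would overflow.
            if part is None or size + len(line) > max_bytes:
                if part is not None:
                    part.close()
                n += 1
                part = open(os.path.join(out_dir, f"{base}.{n:03d}"), "wb")
                size = 0
            part.write(line)
            size += len(line)
    if part is not None:
        part.close()
    return n
```

Wrapped in a loop over a directory of crawl output, this gives exactly the batch facility the manual utilities lack.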

Read the full article →

Focused web crawlers: New Starting URLs

June 10, 2011

Starting URLs are first found by identifying useful directories and PR sites, and sometimes by listening to your spam email. That's how I found www.splut.com. But there will come a time when these avenues are exhausted, you have run all your target markets and similarity sets, and you are stuck. Then I had an idea out […]

Read the full article →

Focused web crawlers: new target markets

June 6, 2011

A focused web crawler needs a similarity set and a starting URL to run. The similarity set is the set of URLs that resemble the target market closely enough to guide the crawl. The starting URL is where the crawl begins. Initial similarity sets come from your client and prospect database. A useful way to determine new […]
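One common way to score that "resemblance" — an illustration of the general technique, not necessarily what this crawler does internally — is cosine similarity over bag-of-words term vectors:

```python
import math
import re
from collections import Counter

def term_vector(text):
    """Bag-of-words term frequencies for a page's text."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two term vectors, from 0 to 1."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def similarity_to_set(page_text, set_texts):
    """Average a candidate page's similarity against every page
    in the similarity set; higher means a better match."""
    v = term_vector(page_text)
    return sum(cosine(v, term_vector(t)) for t in set_texts) / len(set_texts)
```

A page about the target market scores well against the set; an off-topic page scores near zero, which is what lets the crawler stay on theme.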

Read the full article →

Focused Web Crawler: “Closed” Directories should be run first

May 25, 2011

Our focused crawler sometimes finds results only within the starting URL's directory. This is most common with Applegate, Zibb and EngineeringTalk. I call these “closed” because they only produce results from within their own domain. The results are accurate and precise, so such a directory run could be used to find relevant targets for a similarity set […]

Read the full article →

How to install Sweeper/Ushahidi on Mac OS X Leopard

May 24, 2011

Ushahidi is free and open-source software for information collection, visualization and interactive mapping. Swiftriver is an open-source platform that helps users add context to real-time data. Sweeper is an application within this framework. I am not sure of its capabilities, but what caught my eye was this: “Automate the addition of context […]

Read the full article →