This blog post looks at the pros and cons of different approaches to bike sharing.

Bike sharing started in the 1960s, when bikes were freely shared in Amsterdam. The difficulty was that bikes were often stolen.

Lock systems were then introduced (2nd generation bike sharing) and became relatively popular in some cities, particularly in Germany.  3rd generation bike sharing, in common use today, uses electronic methods to detect and track each bike.

Current costs for each bike in a public bike sharing scheme are US$3000 – US$5000, which has meant that only a limited number of cities have been able to afford the up-front investment required.

Peer to peer bike sharing is an alternative sharing scheme.  Using the power of location services on mobile phones, and the social network of existing bike owners, bikes can be added to the scheme at no cost other than communicating the message to their owners.  The bikes already exist as personal bikes; it is simply a matter of matching up the right person to the right bike at the right time.

However, as of writing, 3rd generation sharing schemes still have some advantages:

  • Costs for end usage are zero for short trips (though longer trips are expensive, to recoup the investment)
  • There is no need to return the bike to the original location

Advantages for peer to peer schemes

  • Costs for end usage are usually charged at a low daily rate, which means there is no dash for the nearest station
  • Near zero costs for adding new bikes to the scheme
  • Scalability, worldwide

Perhaps the best approach is to have a combined 3rd generation sharing scheme with a peer to peer model to enhance it.

Disadvantages of the 3rd generation model include

  • Bikes become unevenly distributed, meaning a service provider must redistribute the fleet
  • Lack of scalability outside a given area
  • Inability to handle large volumes of traffic in one busy area (Waterloo station in London was a prime example).

Disadvantages of peer to peer model include

  • Bikes must be returned to their original location
  • Some hand-over is needed of the bike (although there are solutions in development for this)

You can now subscribe to this news page to receive e-mails whenever we publish a post.  This should keep you more in the loop about progress on AtomJump products.

This post is not a solution, but more of an observation, and I welcome your suggestions.

On one of our projects we’re running Subversion (incidentally, Git could be a better tool – it is a distributed version control system), and I suddenly found that I couldn’t commit to the project.

svn: Commit failed (details follow):

svn: Server sent unexpected return value (400 Bad request) in response to OPTIONS request for ‘/svn/…[my path]’

I then remembered that I was on wireless broadband, and on plugging the wired ethernet connection back in, I could suddenly commit once more.  What is causing this?  It isn’t a different IP – but something in the local setup seems to be causing the problem.  This was on Ubuntu Linux, but I had a similar problem on a Vista client.

Update: It turned out to be Vodafone, which has a firewall at their end.  The workaround was to pass the traffic through SSH – not as difficult as it sounds, but you need to check out the repository again with SSH as the transfer method.  In fact, this is more secure, although it requires you to enter a password each time.
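
As a rough sketch (the hostname, repository path and username below are placeholders rather than our actual setup), the commands look like this:

    # check out a fresh working copy over SSH instead of HTTP
    svn checkout svn+ssh://username@svn.example.com/svn/myproject myproject

    # commits from that working copy are then tunnelled over SSH
    cd myproject
    svn commit -m "Committing over svn+ssh instead of http"

Setting up an SSH key pair for the server avoids having to type the password on every commit.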

Build a free PC

Many consultants and people in shops won’t explain that it is perfectly possible to build a free Windows PC nowadays – with all the required software being free (with the exception of Windows itself). Of course, it depends on what you want to do with the PC, but generally a machine built on open source software is perfectly adequate for most tasks.

  • ZoneAlarm for a firewall
  • ClamWin for anti-virus protection
  • OpenOffice for documents and vector drawings, or Inkscape for vector graphics if you already have MS Office and don’t want a large download
  • GIMP for bitmap graphics manipulation
  • Thunderbird for e-mail
  • Firefox for web browsing
  • NVU for web-page creation

Like 80 million Windows users, I had AVG Ver 7 installed happily on my Windows XP PC. The software was free, functional and unobtrusive. From the end of this month this version will stop being updated, according to the software itself, and therefore an upgrade to Ver 8 is required.

My experience with Ver 8 to date has not been pleasant, and I can no longer recommend the AVG package as an anti-virus solution to my clients. The problems started after installation, when Windows Explorer ground to a halt on opening and a number of virus warning screens appeared. The software was taking over the machine with its checks, to the point of making it no longer function. The final straw came when I brought up a Google search in Firefox – usually the fastest results on the Net – only to find that every result had a painfully slow anti-virus check being carried out alongside it, visibly destroying the display. This can be switched off in Tools->Add-ons, but the software didn’t even ask me if I wanted it included.

This release is a prime example of unnecessary feature-creep, and goes to show that you can make a mistake and lose your user base. I hope they correct the software, although it currently looks too bloated to make any modifications without a radical change in direction. Until this is fixed I am recommending the open source ClamWin Antivirus that seems quite limited in comparison but appears functional and doesn’t push itself onto you.

Sometimes the anti-virus industry appears to use scare tactics. The chances of clicking on a virus-laden link on Google are low if you take the common-sense approach of only clicking sensible looking links.

This was on Windows XP; the Vista machine didn’t have a problem.  The wireless printer had connected through to the wireless LAN, but the XP machine was not detecting the printer, and there was no particular error.

It turned out that the Windows XP firewall was the problem.  When I switched off the firewall the printer connected.  In the time I had on-site I couldn’t find how to open the Windows firewall for just that traffic, so I installed another firewall instead and left the Windows firewall switched off.
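
In hindsight, a narrower fix might have been to open just the relevant traffic in the XP SP2 firewall from the command line.  This is only a sketch – the exact ports depend on the printer, and I’m assuming it needed either file-and-print sharing or a raw TCP/IP printer port (9100):

    rem allow file and printer sharing through the Windows XP SP2 firewall
    netsh firewall set service FILEANDPRINT ENABLE

    rem or open the raw printing port that many network printers listen on
    netsh firewall add portopening TCP 9100 "Network printer" ENABLE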

Many people assume that a search engine by its very nature attracts users. However, if you own a new search engine you have to market that engine to your target audience. You can gain users through advertising, word-of-mouth, PR, link exchanges, or SEO (search engine optimization).

SEO on a search engine can be done in two ways.

  • You can target your homepage by aiming to meet searchers on the main engines who are looking for your type of search engine, e.g. ‘real-estate usa search engine’
  • You can redistribute your content in a useful fashion to the main engines

We used the latter method to multiply traffic to our local shopping site. It is in Google’s interest to have quality content from a smaller search engine merged in with its data-set, and they even make this service possible with Google Sitemaps. However, care needs to be taken that there is still some added value in the method of searching once a user comes from Google’s page onto your own, or else none of the users will return to your engine, or even worse – you will be considered a spammer purely chasing advertising money.

Google’s Webmaster Tools provide an XML format to submit up to 50,000 URLs in one file, and more can be submitted by spreading your data-set across multiple files. It can take a few months for Google to index this quantity of pages.
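
As a rough sketch of what the submission files can look like – the domain is a placeholder, and the split follows the 50,000-URL-per-file limit mentioned above:

    # sitemap_sketch.py – write a list of URLs into sitemap files of up to
    # 50,000 entries each, plus an index file that points at all of them.
    from xml.sax.saxutils import escape

    URLS_PER_FILE = 50000

    def write_sitemaps(urls, base="http://www.example.com"):
        files = []
        for start in range(0, len(urls), URLS_PER_FILE):
            name = "sitemap%d.xml" % (start // URLS_PER_FILE + 1)
            with open(name, "w") as f:
                f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
                f.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
                for url in urls[start:start + URLS_PER_FILE]:
                    f.write("  <url><loc>%s</loc></url>\n" % escape(url))
                f.write("</urlset>\n")
            files.append(name)

        # a sitemap index file lets all of the parts be submitted in one go
        with open("sitemap_index.xml", "w") as f:
            f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
            f.write('<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
            for name in files:
                f.write("  <sitemap><loc>%s/%s</loc></sitemap>\n" % (base, name))
            f.write("</sitemapindex>\n")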

The principle involves selecting a number of query terms from your data-set (in our case it was product names) and submitting a search URL for each, for example:

http://YOUR_SEARCH_URL?q=red+hat

http://YOUR_SEARCH_URL?q=garden+fork

The title of these pages needs to be relevant to the query terms. The descriptions of the products on your results pages will then be indexed on Google, and anyone finding you will be introduced to your engine as a second tier results page.
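
As a minimal sketch of that step – the search URL and product names below are made up, and the title wording is only an illustration of keeping it relevant to the query:

    # query_urls_sketch.py – turn product names into the search URLs that go
    # into the sitemap, plus a matching title for each results page.
    import urllib.parse

    SEARCH_URL = "http://www.example.com/search"  # stand-in for YOUR_SEARCH_URL

    def query_page(product_name):
        query = urllib.parse.quote_plus(product_name.lower())
        url = "%s?q=%s" % (SEARCH_URL, query)
        title = "%s - search local shops" % product_name  # keep the title relevant to the query
        return url, title

    for name in ["red hat", "garden fork"]:
        print(query_page(name))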

It is best if there is a call to action to modify the search using your more specific search facility.

Note: I feel a careful decision needs to be taken before embarking on such an effort. There are many people using this sort of technique to introduce a page of adverts which people then click through on. It needs to be for a genuinely unique set of data that is actually useful for an end user.

When it comes to search engine optimization we are often looking for a ‘quick fix’. I have just finished working in a UK business where we were bombarded with calls from companies claiming to “take you to the top spot on Google”. They achieved this by having a broad directory that had managed to become well placed for certain key terms. They were then selling that place semi-exclusively for a fee.

What makes their directory any more relevant in Google’s eyes than a well optimized individual page? I would predict that the number of inbound links into their directory has built up over time, meaning that individual pages are highly placed. But – a word of warning – is the relevancy of the page suitable for your entry? Be careful of ‘guaranteed number of enquiries’ claims. The claim that we received was well over the top, and as such we instantly lost trust in the firm. Ask for the keywords and the directory name, and explore the results for yourself once you are off the phone.

It will be interesting to determine if this type of market has been exploited in New Zealand as well.

A search engine optimization firm will usually take a different approach, and work with you to explore your target market (keywords), and how you might reach that market. A good summary of the area is available on Wikipedia, and points out the difference between white-hat and black-hat optimization. The decision to optimize any site can be a moral one – ask yourself ‘is this page genuinely useful for this type of search’ before creating it.