In my opinion, Git is a programmer's program. It is fast and feature-rich yet intuitive, kind of like Google... there's a new treasure waiting to be found around every corner. The philosophy behind Git appeals to me; there's sure to be a lot to learn by appreciating its architecture and studying its internals.

One of the few things that bugs me about Git is what happens when you finally realize that you have been committing unnecessary, massive binaries such as database files and executables, especially ones that are modified frequently.

The problem is that if even a single bit changes, the repository must make a new, albeit compressed, copy of the whole file, while of course keeping all previous copies archived for future reference. The repository size quickly bloats. While Git provides methods to undo such mistakes, the process is by no means easy or fun; I, for one, am not proficient enough with Git to fully understand it.

I've been doing a fair bit of compiling lately and have found that the output binaries are getting in the way of my workflow. Sure, there are ways to avoid this, such as having makefiles write output to /bin directories and then explicitly referencing those in a .gitignore file, but that's not practical in my situation.

In Windows environments things are actually a little easier, as file extensions are almost always employed and they make filtering a snap. In Linux, however, things are a little trickier...

As such, I've put together a script that employs the file utility to identify file MIME types, filter them, and then automatically add them to .gitignore. It can be run from within any directory of the repo to create per-directory .gitignore files; however, executing it from the repo's root to create a top-level .gitignore is probably good enough, and more maintainable.

If there's a need, I might implement a recursive maintenance utility, but for now it only deals with a single gitignore within the current working directory.

AutoGit: automatically filter files by their MIME type to keep binary files, databases and other undesired content types out of version control.

Here's a direct link to the AutoGit bash script on GitHub.

  1. Save it as a text file named autogit, or something else flavorful.
  2. Make it executable: chmod +x autogit
  3. Put it in the root of your Git repo, or even better, place it somewhere in your PATH such as /usr/local/bin.
  4. cd /your/git/repo
  5. Run autogit (or ./autogit, as appropriate).
  6. autogit appends the file paths it identified to .gitignore and dumps a report plus a git status.
  7. By default, the MIME types application/x-executable and application/octet-stream are enabled. Edit the script to add or modify desired MIME types.
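The script itself leans on the file utility to classify content. As a rough sketch of the same idea (not the actual script; the function names and the ELF-magic shortcut here are illustrative only), the following Python flags Linux binaries by their magic bytes and appends their paths to .gitignore:

```python
import os

ELF_MAGIC = b"\x7fELF"  # first four bytes of every Linux ELF executable/library

def find_binaries(root="."):
    """Walk the tree and yield paths whose magic bytes mark them as ELF binaries."""
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d != ".git"]  # skip Git's own store
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    if f.read(4) == ELF_MAGIC:
                        yield path
            except OSError:
                pass  # unreadable file: ignore it

def append_to_gitignore(paths, gitignore=".gitignore"):
    """Append the identified paths to a .gitignore file."""
    with open(gitignore, "a") as g:
        for p in paths:
            g.write(p + "\n")
```

The real script matches MIME types such as application/x-executable via the file utility, which also catches databases and other binary formats this simple ELF check would miss.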

The picture is of marshland adjacent to Pitt Lake near my place. I originally emailed this post in from my iPhone 4; it looked OK in the mobile version of Blogger but had weird line breaking going on that I later had to undo from a browser... Just me testing.
While SQLite provides the same functionality as the traditional result = (condition) ? value-if-true : value-if-false, it does not support that syntax. The general syntax is:

CASE WHEN condition THEN value-if-true ELSE value-if-false END

For example, here's the correct SQLite ternary syntax to check for a NULL value before incrementing it:

SET myValue = CASE WHEN myValue IS NOT NULL THEN myValue + 1 ELSE 1 END

If myValue is not NULL, set myValue to myValue + 1, ELSE initialize it to 1.
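A quick way to watch the CASE expression behave like ?: is from any SQLite binding; here is a small Python sqlite3 session (the table and column names are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT, myValue INTEGER)")
conn.execute("INSERT INTO counters VALUES ('seen', 5), ('new', NULL)")

# CASE WHEN ... THEN ... ELSE ... END is SQLite's ternary:
conn.execute("""
    UPDATE counters
    SET myValue = CASE WHEN myValue IS NOT NULL THEN myValue + 1 ELSE 1 END
""")

rows = dict(conn.execute("SELECT name, myValue FROM counters"))
print(rows)  # {'seen': 6, 'new': 1}
```

For this particular NULL-or-increment case, SET myValue = IFNULL(myValue, 0) + 1 is an equivalent shorthand.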
The example below uses Google's OpenID API to request and validate the user's GMail address. The visitor is first directed to Google's sign-in page to approve the site's request to access their email address. Once approved, the visitor is sent back to this page along with the OpenID identifiers and signatures required to validate the user's credentials directly with Google.

Further to my Single Sign-on with Facebook, LinkedIn and GMail post, below is a single-page example, written in PHP, of how to authenticate, authorize and obtain user data such as an email address with Facebook's OAuth API.


if ($user->email) {

    // Code to handle successful authentication.
    echo "$user->name - $user->email - authorized - signing in now...";

    // Take the user to the main page.
    echo "<script> setTimeout(function(){window.location='/'}, 3000);</script>";
}
else {

    // Code to handle failure or refusal.
    echo "Something went wrong while trying to authorize your account with Facebook.";
}
?>

One of the greatest improvements in online usability and user experience since the adoption of AJAX techniques -which provide functionality such as Google Suggest- is the standardization of single sign-on. OpenID has been around for a long time and has helped pave the way to robust, secure third-party information-exchange frameworks such as OAuth.

The purpose of these services is to allow sites to authenticate end users and optionally access their information such as email address, photo stream, Facebook profile, LinkedIn contacts or any other information the site may request and the user agrees to share.

For example, a few simple clicks and your visitor can sign up to your site and provide you with a validated email address without completing yet another subscription form, replying to yet another email confirmation message or memorizing yet another password... All this is thanks to the fact that everyone already has an account with a large, trusted service provider such as GMail, Facebook, LinkedIn, Yahoo! or even Blogger. These third-party services implement standardized protocols such as OpenID and OAuth to offer a decentralized, user-centric authentication and authorization service that any site can leverage to enhance its users' experience.

Let's say your visitor is already signed into Facebook. If they are then presented with the option of signing in to your site using their Facebook account, all they have to do is click a button to approve your site's request with Facebook. Assuming your visitor approves this access, Facebook returns the requested information and provides you with a method to authenticate the details with Facebook directly, ensuring that the visitor is who they say they are. That is, they are legitimate as far as Facebook is concerned and, at a minimum, they have an active Facebook account with an associated, previously validated email address.

The more information you require, the less inclined the user will be to approve your request; research indicates that user acceptance is inversely correlated with the amount of information requested. So, unless your site provides some form of integration with the visitor's social data, there is no need to request anything more than their email address, which is the norm.

The main factor working against widespread adoption of single sign-on and user-information interchange has been implementation complexity, which has largely limited deployments to competent system admins with above-average server-side programming capabilities. While specific implementations have become more straightforward to undertake, there is at the same time a large amount of change occurring in the underlying standards, along with the volumes of mostly technical literature that follow. This can make implementation tough at a practical level.

Facebook has by far done the best job implementing their solution, with working examples that fit on a single page; see my Facebook Single Sign-on Example article. LinkedIn, on the other hand, seems to have gone out of their way to make the process as obscure as possible. Google is somewhere in between: while on the one hand they are advancing standards development and providing proprietary enhancements with loads of examples and technical documentation, on the other they lack simple examples to facilitate practical implementations. In short, they simply offer too much choice.

Third parties are more interested in having their user data shared and embedded than in merely providing a free authentication service. This is why service providers such as LinkedIn make embedding user content easier than authenticating. Google's OAuth implementation, for example, is a two-step process -approve request, share data- and you have to do both. While most site admins are mainly interested in authentication and a valid email address, LinkedIn will not allow users to share it. Facebook, on the other hand, implements the more robust OAuth standard but permits you to do what you want: authenticate the user and get their email, and optionally share data.

In a nutshell, if you only want authentication/email OpenID is all you need. If you want to access user social data, OAuth is the more robust way to go. The problem is that each provider has different limitations and proprietary extensions -change is the only constant.

Deciding which technique to use depends mainly on what you require from the 3rd party service and to a lesser degree your architecture and administration capabilities and security level requirement.

My recent experience comes from implementing these authentication and authorization techniques on a service providing dynamic QR codes that I am currently designing.

I will create links to Single Sign-on How-tos for several different scenarios below:

Instructions for Single Sign-on using Facebook

Instructions for Single Sign-on with Google

Server-Side Solutions - Most Flexible, Powerful and Secure:

Even if only a basic level of security is required your server must somehow know that the 3rd party service provider can vouch for the identity of the user. While this can be achieved in several different ways, all of these methods require some sort of server-side administration capability. You will need to place some code on the server for example, in order to process and validate the authentication and authorization process.

Things can get complicated because it is possible to only partly implement a solution, which can leave your site susceptible to spoofing attacks (bad people pretending to be a trusted third party), and each provider implements things a little differently, which only adds to the complexity and the odds of someone making a mistake...

Web Browser Solutions - Easy, Fast but Least Powerful: 

Here, third-party JavaScript APIs allow anyone who can add a <script> tag to their site -or even blog- to 'authenticate' visitors and authorize their data to appear on the page.

These client side techniques simplify user data integration but make secure authentication more challenging...More on this later.

One requirement of the OAuth spec is the "Lexicographical Byte Value Ordering" of request parameters. The term lexicographical is misleading, as it implies a form of case-insensitive dictionary sorting, whereas in practice the spec calls for an ordinal sort. More simply put, all the spec requires is sorting by ASCII character value. In case two parameters share the same name, the ordering convention applies to both the parameter's name and its value -i.e., concatenate key+value before sorting and call it a day.

To keep things simple, I found that expanding the parameter key/value pairs into strings of ASCII codes in hexadecimal format allows PHP's built-in asort($myByteArray, SORT_STRING) to do the trick.

Pass in a delimited string of key=value pairs and this function returns a string with the parameters urlencoded and sorted ordinally by ASCII value, as per the OAuth spec:




Params in:
Msg=Hello World!, MSg=Hello World!, 1=one ,za=1, a= 2>1, B= 2 , c= hi there,f=50, f=25 , f=a, test=z, test=z1, test=z12, test=

Outputs:

1=one&B=2&MSg=Hello World!&Msg=Hello World!&a=2>1&c=hi there&f=25&f=a&f=50&test=&test=z12&test=z&test=z1&za=1


(Actual output with urlencoding)

1=one&B=2&MSg=Hello%20World%21&Msg=Hello%20World%21&a=2%3E1&c=hi%20there&f=25&f=a&f=50&test=&test=z12&test=z&test=z1&za=1
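Here's the same ordinal sort sketched in Python (a hypothetical stand-in for the PHP function, not a copy of it; it orders duplicate names strictly by the byte value of the encoded value, so duplicate-key ordering may differ slightly from the sample output above):

```python
from urllib.parse import quote

def normalize_params(params):
    """Percent-encode each key and value, then sort the pairs by raw byte
    value ("lexicographical byte value ordering"); pairs sharing a name
    are ordered by their values."""
    encoded = [(quote(k, safe=""), quote(v, safe="")) for k, v in params]
    encoded.sort()  # ASCII/byte-value tuple sort: name first, then value
    return "&".join(f"{k}={v}" for k, v in encoded)

params = [("Msg", "Hello World!"), ("MSg", "Hello World!"), ("1", "one"),
          ("za", "1"), ("B", "2"), ("f", "50"), ("f", "a"), ("f", "25")]
print(normalize_params(params))
# 1=one&B=2&MSg=Hello%20World%21&Msg=Hello%20World%21&f=25&f=50&f=a&za=1
```

Because Python compares tuples element by element, sorting the (encoded name, encoded value) pairs gives the name-then-value byte ordering directly, with no hex expansion needed.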

Here's the URL path locations of jQueryUI themes as hosted on Google's Code Distribution Network:

Base source
Black Tie - source
Blitzer - source
Cupertino - source
Dark Hive - source
Dot Luv - source
Eggplant - source
Excite Bike - source
Flick - source
Hot sneaks - source
Humanity - source
Le Frog - source
Mint Choc - source
Overcast - source
Pepper Grinder - source
Redmond - source
Smoothness - source
South Street - source
Start - source
Sunny - source
Swanky Purse - source
Trontastic - source
UI Darkness - source
UI Lightness - source
Vader - source


//
// Code to make it work -e.g., a stylesheet link for the Smoothness theme:
//

<link href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.24/themes/smoothness/jquery-ui.css" rel="stylesheet" type="text/css"/>



One of the downsides to distributed cloud computing is the increased number of HTTP requests that are required in order to pull together a given web page. Each time an externally located resource is needed, the web browser must resolve the hostname and create a discrete HTTP socket in order to fetch it.

The client experiences this as a mounting performance penalty -a slow site.

Ideally, in terms of performance, a single HTTP request would fetch a single resource containing a complete HTML document. This can be helped along by embedding CSS style sheets and JavaScript within the HTML document as opposed to referencing them externally. In reality, however, the fetched page more likely instructs the client browser to fetch several other linked resources in order to assemble the final document.

The growing number of external resources is driven by the rapid development and popularity of third-party libraries such as jQuery.

The Google Libraries API offers managed, distributed code serving via google.load(), along with Google's own search and other open-source APIs. By offloading serving to Google's code-distribution cloud, version control, file size and caching of third-party APIs can be optimized. The Google Libraries API currently hosts the following resources:
  • Chrome Frame
  • Dojo
  • Ext Core
  • jQuery
  • jQuery UI
  • MooTools
  • Prototype
  • script.aculo.us
  • SWFObject
  • Yahoo! User Interface Library (YUI)
  • WebFont Loader


/*
 * Place the following code between <head></head> tags
 * of your blogger template.
 *
*/

<script src="http://www.google.com/jsapi" >
</script>

<script >
 
google.load("jquery", "1.4.2"); // google.load("jquery", "1") if a specific version is not required
google.load("jqueryui", "1.8");
</script>


Google requires an API key to use methods such as google.load() on non-Google domains. You can link directly to the libraries like this:

<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js" type="text/javascript">
</script>


Paths to each Google Library API can be found here.


Or load JSAPI with an API key as follows:

<script src="http://www.google.com/jsapi?KEY=yourAPIKeyHere">
</script>



Sign up for a Google API key here.







/**
     * Encode HTML tags as HTML Entities 
     * using jQuery
     *
     * Code takes raw HTML from first Textarea tag and 
     * places HTML entities into a second Textarea tag.
     */

function htmlEntities(){

  var htmlStr = $('#taInput').val();

  // Set the text of a detached <div>, then read back its HTML
  // to get the entity-encoded equivalent.
  $('#taOutput').val($('<div/>').text(htmlStr).html());
  $('#taOutput').effect('pulsate');
}

Providing a user-friendly, secure and reliable remote shared-data access solution is an essential IT service that will affect the day-to-day productivity and satisfaction of your staff. Whether you need to support one remote account or ten thousand, one solution that provides a user-friendly experience while minimizing complexity for systems administrators is Microsoft’s Remote Desktop Connection (RDC). This method allows personnel to log in to their own personalized desktop from any remote location, just as if they were sitting at their own PC in the office. As a type of “thin client”, the remote PC only relays the keyboard, mouse and screen display, while the work of running application software and managing shared file storage remains with the corporate server. All that is required of the telecommuter is a basic PC and a high-speed Internet connection.

Every Microsoft Windows operating system includes the RDC client software, and every Windows PC (except the “Home” editions) is capable of hosting at least one Remote Desktop connection. This means that even if your company does not have a full Microsoft server, it is still possible to implement a basic RDC solution with no additional software cost.
The underpinnings of RDC are provided by Microsoft’s Terminal Services. Entry-level Microsoft server operating systems allow a maximum of two simultaneous connections, beyond which additional Terminal Services licenses are required.

BENEFITS
  1. Remote personnel are provided with the same, familiar desktop and applications that they see while working from within the office
  2. A single-point of administration for security, user privileges and application software reduces cost and complexity for Systems Administrators.
  3. The office PC can be accessed from anywhere that an Internet connection is available.

    CONFIGURATION
    The most difficult aspect involves configuring the corporate network appliance such as an Internet router or firewall. A rule must be created that will forward inbound requests received on the public Internet Protocol (IP) address to the appropriate internal, private IP address of the Terminal Server or dedicated workstation.
    EXAMPLE CONFIGURATION
    RDC utilizes IP port number 3389 by default. For example, if the private IP address of the Terminal Server or dedicated workstation you want to connect to is 192.168.1.10, and the public IP address of the office is 70.68.47.137, then the following firewall/router rule is required: TCP Inbound 70.68.47.137:3389 --> 192.168.1.10
    This type of rule is commonly assigned under the “Port Forwarding” or “Applications” section of Internet firewalls.

    TESTING
    Ensure Terminal Services is running and accessible from within the office by opening a Remote Desktop Connection on an available PC, entering the PRIVATE IP address of the Terminal Server in the “Computer” field and clicking “Connect”.
    To test remotely, the forwarding rule must be in place. Open Remote Desktop Connection on the remote client PC and enter the PUBLIC IP address of the office in the “Computer” field and click “Connect”.
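The remote test can also be automated with a basic TCP probe. Here's a minimal sketch (Python for brevity; substitute your office's public IP for the host):

```python
import socket

def rdp_reachable(host, port=3389, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds -- a quick
    check that the RDP port-forwarding rule is working before attempting
    a full Remote Desktop session."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. rdp_reachable("70.68.47.137")  # the office's public IP from the example
```

Note this only proves the port is open and forwarded; a successful check does not guarantee Terminal Services will accept the login.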
    SECURITY AND USABILITY
    Sometimes port 3389 can be blocked by Internet Service Providers. An alternative is to use Terminal Services Web (see TSWeb note 4) or a Virtual Private Network (see VPN note 5 ). A VPN solution in conjunction with a firewall provides more robust security and protection against denial-of-service and other attacks.
    NOTES
  1. To determine the public IP of the office:
    1. from the office, visit http://whatismyip.org/
  2. To display the private IP of the Terminal Server/Workstation:
    1. Click Start-->Run
    2. Type cmd [ press enter]
    3. Type ipconfig [press enter].
  3. To enable Remote Desktop on a Windows workstation (unavailable on MS Windows “Home” versions):
    1. Right-click “My Computer”
    2. Click “Properties”
    3. Click “Remote” tab
    4. Place a check in the “Allow Remote Connections” box
    5. Click “OK”
  4. TSWeb is an ActiveX plug-in for Internet Explorer that acts as a gateway to Terminal Services. This allows RDC to be carried over HTTP on port 80 rather than port 3389 (which can sometimes be blocked by ISPs). With a TSWeb solution, all the client requires is Internet Explorer rather than the Remote Desktop Connection client.
  5. When connecting over a VPN, the private IP of the Terminal Server/Workstation should be used in the “Computer” field when starting Remote Desktop Connection.
  6. IP addresses on the office side should be statically assigned so that they never change.
  7. In order for remote users to see the same desktop as they do when they log in locally, each user account must have the Terminal Services user profile path set in Active Directory to the same UNC path as their local profile.  
  8. RDC client software is also available for non-Windows clients such as Linux/Mac

© WAYNE DOUCETTE SEPTEMBER 2010



Dynamic Keyword Description Meta Tags to Improve Blogger SEO:


It took me a while to research this technique; hopefully it'll save you some time. It creates meta tags dynamically in Blogger for everything except archive paths, in which case they're left blank.

Open your Blogspot account and proceed to the HTML template editor. Locate the <head> tag and paste the following code immediately below it:



<!--start dynamic description keyword title meta tags for Blogger -->


<title><data:blog.pageTitle/></title> 

<b:if cond='data:blog.pageType != &quot;archive&quot;'> 
<meta expr:content='data:blog.pageName + &quot;, &quot; + data:blog.title' name='Description'/>
<meta expr:content='data:blog.pageName + &quot;, &quot; + data:blog.title' name='Keywords'/>
</b:if>


<!--end dynamic description keyword title meta tags for Blogger -->





This will generate unique description and keyword meta tags constructed of your post title + blog title setting. It's a good SEO practice to have the title tag on top, and if you operate a multi-issue blog where global meta tags wouldn't be appropriate, then Dynamic Meta Tags will help search engines properly index your blogger content.



UPDATE:

Here's my setup: it sets the page title to the item title, or to the blog title when there is no item title. This removes Blogger's default Blog Title + Item Title behavior.

It simply leaves meta description blank -I've found that Google is great at producing concise descriptions in most cases.

<b:if cond='data:blog.pageName == &quot;&quot;'> 
<title><data:blog.title/></title>
<b:else/> 
<title><data:blog.pageName/></title>
</b:if>
<b:if cond='data:blog.pageType != &quot;archive&quot;'>
<meta expr:content='data:blog.pageName + &quot;, &quot; + data:blog.title' name='Keywords'/>
<b:else/>
</b:if>


Valid HTML must be encoded into HTML entities before it can appear in its literal format on a web page. Here is a function written in JavaScript to encode HTML for displaying on a web page or blog. The function converts valid HTML into HTML entities:

" gets encoded into &quot; > gets encoded into &gt; < gets encoded into &lt;

To place HTML or JavaScript code anywhere in a document as visible text, it's best to encode it into HTML entities.
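The same transformation is available outside the browser. For instance, Python's standard html module does it in one call (note that html.escape also encodes quotes, which the jQuery div trick leaves untouched):

```python
import html

def html_entities(raw):
    """Convert raw HTML into entity-encoded text safe to display literally."""
    return html.escape(raw, quote=True)

print(html_entities('<a href="x">hi</a>'))
# &lt;a href=&quot;x&quot;&gt;hi&lt;/a&gt;
```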
If you're looking for scripting access to client-side JavaScript, or screen-scraping mechanisms to capture content as rendered in the browser, this will be of interest to you.

I've been noticing Ruby for about three years now, stumbling onto Ruby on Rails only occasionally, finding it sparsely dispersed but proudly heralded within the development community. Until recently I'd pretty much ignored Ruby and stuck with traditional LAMP platforms, relying on PHP for server-side scripting.

Something I've wanted to do for a long time is automate web-browsing tasks. While I've used Perl's Mechanize library, my most pressing desire was to capture client-side JavaScript. My research uncovered two possible solutions.

The first was JSSh, a Firefox extension providing a TCP/IP JavaScript shell server for Mozilla, which I found over at Ideas for Dozens: Telnet to JavaScript. JSSh accepts a telnet connection into Mozilla's JavaScript environment. While JavaScript Window objects are passed as objects in JSSh, there seem to be limitations, as these objects do not appear to offer full inheritance of Window objects. Basic Math, Array and other objects are present, but what I needed was the Window.setTimeout() method. Maybe I am not fully understanding the functionality of JSSh, but if it has more features, they're not well documented. For certain limited applications, JSSh offers great flexibility by giving any telnet-capable application access to JavaScript, and it is nonetheless very cool.

My next tangent was Watir, an automated IE screen scraper written in Ruby. With Ruby and the libraries Watir and BeautifulSoup, I was able to automate a fully functional screen scraper in a couple of hours (it should have been minutes, had I already been familiar with Ruby). The class has three functions: it opens a specific page, logs in if required and then monitors the contents of a specific HTML tag. When the content changes, it raises an alarm. On initialization:
  • Open desired web page in a hidden IE window
  • Login if redirected to login page
  • Hold the contents of a single specific HTML tag in a Class variable
On updates:
  • Wait a specified delay interval
  • Refresh the page
  • Raise alarm and open a visible IE window if content has changed
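The original class is Ruby driving IE through Watir, but the monitoring loop itself is language-neutral. Here's a minimal sketch in Python, with a fetch callback standing in for "open the page and read the tag" (the names are mine, not Watir's):

```python
import time

def watch(fetch, polls=10, interval=0.0, on_change=None):
    """Call fetch() repeatedly; whenever the watched content changes,
    record (old, new) and fire the on_change alarm callback."""
    changes = []
    last = fetch()                # initial contents of the watched tag
    for _ in range(polls):
        time.sleep(interval)      # wait the specified delay interval
        current = fetch()         # refresh the page
        if current != last:       # raise the alarm on a change
            changes.append((last, current))
            if on_change:
                on_change(last, current)
            last = current
    return changes

# Example with a canned sequence instead of a live page:
values = iter(["a", "a", "b", "b", "c"])
print(watch(lambda: next(values), polls=4))  # [('a', 'b'), ('b', 'c')]
```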
OK, I guess now I'm a Ruby fan too; I've been reading Ruby documentation ever since. Backed by Apple, I'm sure Ruby on Rails is destined for even more popularity.
A list of the most useful free PC and Internet tools -functional, easy to use, free, for business and personal use. The great thing about the Internet is the vast amount of information and great services out there. Sadly, the bad thing about the Internet is also the vast amount of information and great services out there. How are you supposed to sift through it all to know what's useful? How do you know what's available other than by accidentally finding it? Here you will find not the most comprehensive list, but one of the most useful.
The Essentials:

Google Toolbar: Overview: One of the greatest widgets available to enhance the functionality of your web browser. Best feature: Google Suggest -auto-completes from Google keywords as you type in the search box; works as a great dictionary spell checker too! Overall Coolness: 7 Functionality: 9 Ease of use: 7 Installation: 9 Platforms: All

Firefox: Overview: Arguably the best web browser. Best feature: Endless add-on widgets; customizable. Caveats: Memory-intensive, and the default settings are not optimal -e.g., a page does not begin to render until the entire HTML document has been received, which gives the impression of being slow relative to Internet Explorer, which updates the screen as it receives the HTML. This can be changed easily through Firefox's advanced options. Overall Coolness: 9 Functionality: 9 Ease of use: 7 Installation: 10 Platforms: All

Google Documents: Overview: Edit, store and share Microsoft Office files online. Best feature: Collaborate and share documents online. Caveats: Does not support a wide range of file types. Overall Coolness: 7 Functionality: 7 Ease of use: 7 Installation: N/A Platforms: All

Google Pages: Overview: Free web storage with online web-page editing tools. Best feature: Free web storage. Caveats: Limited to 100MB of file space. Overall Coolness: 8 Functionality: 7 Ease of use: 7 Installation: N/A Platforms: All

Wikipedia: Overview: Free online encyclopedia; find anything or contribute. Best feature: Find anything you ever wanted to know. Caveats: Make sure you are aware of the source(s). Overall Coolness: 8 Functionality: 10 Ease of use: 6 Installation: N/A Platforms: N/A

The Free Online Dictionary: Overview: Free online dictionary. Best feature: Free, with audible pronunciations. Overall Coolness: 7 Functionality: 7 Ease of use: 7 Installation: N/A Platforms: N/A

OpenOffice: Overview: Free MS Office suite replacement. Best feature: Saves documents as PDF. Caveats: Saves files as OpenOffice types -be sure to set preferences to save as Microsoft Office-compatible .doc, .ppt etc. Overall Coolness: 7 Functionality: 8 Ease of use: 7 Installation: 8 Platforms: All

The Gimp: Overview: Free Photoshop replacement. Best feature: Full-featured advanced photo editing equivalent to Adobe Photoshop. Caveats: Add-ons take some research. Overall Coolness: 8 Functionality: 8 Ease of use: 6 Installation: 7 Platforms: All

Picasa: Overview: Free photo-management tool. Best feature: Manage albums on your PC, then share online with a secret URL, or make them public. Caveats: Free online storage limited to 1GB. Overall Coolness: 8 Functionality: 8 Ease of use: 8 Installation: 8 Platforms: All

More to come: Blogger, Google Search Extensions.

Specialized applications:

Irfanview: Universal image viewer and converter. The most comprehensive file-type support on the planet.
CamStudio Screen Recorder: Capture screen video recordings in AVI/SWF formats. Open source, GPL.
MediaCoder: Universal audio/video transcoder -convert and tweak any A/V file format. Open source, GPL.

Specialized tools and services for the web:

Blip.tv: Online video storage and syndication.
Zookoda E-Mail Marketing: Free professional email subscription-list service.
Statcounter: Invisible web tracker, website analytics and visitor counter.
Mambo/Joomla: Open-source content management system (CMS). Build, customize and administer your own websites. Best with Linux, Apache, MySQL and PHP.

Business tools and services for the web:

United States Patent and Trademark Office: Patent and trademark search.

Google Base: Upload products to a database and advertise online for free.
Google Checkout: Buy and sell online. Free listings, low credit-card rates.
Craigslist: Free online classifieds -the best!
Skype: Free PC-to-PC calls, low PC-to-phone calling rates. Get a phone number anywhere in the world for your virtual office.

In my eBay Bans Digital Items post, I promised to deliver a reasonable work-around for the many eBay sellers affected by that nasty policy change. As of now, I've got something that might do the trick... But first a word of caution: I'm not entirely certain whether my methods comply with eBay's listing policies, because eBay is exceptionally unclear about what the new rules entail. In fact, I don't believe they know either, as they seem to be making them up, moving the goalposts as they go. Meanwhile, please accept this as a solution in principle only; some fodder for the mix. As I said, I'm not 100% certain that what I am doing is acceptable to the eBay gods.

***UPDATE*** In the comments section, I've been told that Pay Now buttons are forbidden; you can read my thoughts here.

My solution involves the following key points: Post multiple auction items as usual, offering to deliver your electronic items on CD. In each listing, invite viewers to see all your items in your eBay store rather than wait for the CD in snail mail. You can't say "go here to get a digital download", but you are free to cross-promote other items and your eBay store within the body of any auction item.
  1. Create a classified ad: The price is arbitrary, but eBay requires you to assign one. I'm not sure why, because a classified ad can be used to advertise anything -not just items for sale. I guess setting the price to the cost of the lowest item you plan to sell is reasonable. e.g., kind of like $5.95 (and up)
  2. Create a Promotions box to display only your class ad.
  3. In your eBay store, create a custom page to display a Promotions box you created above.
  4. Assign the custom page to be your home page.
The classified ad acts like your eBay store's digital-download section -you can put in as many Buy Now buttons or links to a website off eBay as you like. I had an additional problem because classified ads aren't available on eBay.ca. To work around this, I have a two-promo-box home page: the classified-ad box is empty outside eBay.com, so it is invisible there. In the second box, I just have a text link -see all my items on eBay.com. This way, visitors are directed to my store on eBay.com, where they'll see the classified ad. By creating an Instant Download category in my store, I really pushed the envelope; this is not strictly required, but that's where I listed my classified ad, and I did it for testing purposes only. Anyway, visit eBay.com and search for the example auction item here to see for yourself. While you're here, check out the YouTube videos below -some are pretty good. The menu button allows you to preview and choose different videos. Please post your feedback!

Syndicating content in a user friendly way is important. This is especially true if you operate a multi-issue blog where it's nice to be able to fragment content into separate channels.

Content syndicated into discrete RSS feeds allows your audience to choose which content channels they wish to subscribe to.



Default Blogger RSS feed paths:
http://wayne-doucette.blogspot.com/feeds/posts/default

http://wayne-doucette.blogspot.com/rss.xml

http://wayne-doucette.blogspot.com/atom.xml


Alternate Custom Blogger RSS Feed Paths:

Use these paths to syndicate content other than those provided by default in Blogger. Each example outputs an XML formatted feed -create as many alternate feeds as you wish:



Generate a Blogger RSS/XML feed from post label(s)
http://wayne-doucette.blogspot.com/feeds/posts/default/-/Tech/How-to
/*http://example.blogspot.com/feeds/posts/default/-/label1/label2 ...*/



Sort Blogger RSS/XML feeds like this:
http://wayne-doucette.blogspot.com/feeds/posts/default?orderby=updated
http://wayne-doucette.blogspot.com/feeds/posts/default?orderby=published
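Blogger's default feeds are Atom documents, so they're easy to consume programmatically. Here's a small Python sketch that pulls the post titles out of one (the parsing is generic; the URL in the comment is just the example feed above):

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def entry_titles(feed_xml):
    """Return the <title> of every <entry> in an Atom feed document."""
    root = ET.fromstring(feed_xml)
    return [entry.findtext(ATOM + "title") for entry in root.iter(ATOM + "entry")]

# Typical use:
#   from urllib.request import urlopen
#   print(entry_titles(urlopen("http://wayne-doucette.blogspot.com/feeds/posts/default/-/Tech").read()))
```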


Blogger RSS Feed Usages

Use these RSS feeds as you like: run them through a third-party feed-burning service like http://feedburner.com, embed them in your Blogger template, or create custom links to third-party subscription buttons for services like this.
To add the Blogger How-to feed to Google Reader, see Google's blog here.



Embedding an RSS feed in Blogger Template
Paste the following code in the <head> section of your Blogger template:

/*
<link href="http://wayne-doucette.blogspot.com/feeds/posts/default/-/Tech/How-to?orderby=updated" rel="alternate" title="Blogger How-to Channel" type="application/rss+xml">
*/
According to trends data from search-engine giant Google Inc., Internet users around the world show more search interest in eBay than in sex. Internet searches for eBay have been growing steadily, briefly outranking other popular search terms in November 2007, including searches on God, money, love and sex.

By region, surfers in the UK, France, Germany and Australia looked to eBay, while those in India, Poland, the Netherlands, Denmark and North America preferred sex.
