Friday, January 19, 2007

HTTP Cookies -- Issues, Benefits and Inaccuracies

I found the article "HTTP Cookies Explained" by Andrew Nielsen rather interesting. It's always good to understand the different pieces of working online, buying online, or running an ecommerce site -- and to know which things are always a concern.

This is basically an informative piece, giving you the full story of HTTP cookies -- from their beginnings to today's issues and problems. What I found interesting is the data privacy concern created by the ability to track user behavior across multiple websites -- which has made cookies the subject of legislation in the US, the United Kingdom and other countries.

And on the issue of multiple users and the accuracy of the user profile -- I have to agree. If several family members use the same computer, the profile will not reflect any one of them accurately -- and that can definitely throw off your email marketing for new promotions.


HTTP Cookies Explained

by Andrew Nielsen


An HTTP cookie is a small piece of data. This data is sent by a web server when a user loads a page and then sent back unchanged to the server every time the user accesses the server. The purpose of this is to allow the server to identify the individual users requesting web pages from the server.
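To make the mechanism concrete, here is a minimal sketch of the exchange using Python's standard http.cookies module. The cookie name, the session value and the attributes are made up for illustration; a real web framework would emit the same Set-Cookie header for you.

```python
from http.cookies import SimpleCookie

# Server side: attach a small piece of data to the response.
cookie = SimpleCookie()
cookie["session_id"] = "a3f9c2e1"        # hypothetical opaque token
cookie["session_id"]["path"] = "/"
cookie["session_id"]["max-age"] = 3600   # stop sending it after one hour
print(cookie.output())                   # e.g. Set-Cookie: session_id=a3f9c2e1; Path=/; Max-Age=3600

# Browser side: the same value is returned unchanged on every later request,
# which is what lets the server recognize the individual user.
returned = SimpleCookie("session_id=a3f9c2e1")
print(returned["session_id"].value)      # a3f9c2e1
```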

Cookies were invented to allow web servers to track and maintain information about the contents of users’ electronic shopping carts. Cookies allowed the server to uniquely identify which user was adding or removing items from a shopping cart and thereby to keep track of individual shopping carts. Without cookies, each interaction with the web server had to be treated as a separate event, and there was no obvious or accurate connection to a user’s previous actions.

Today, cookies are also used to keep track of user site preferences and of user behavior across multiple websites. The latter is used primarily for advertising: tracking the user across multiple websites makes it possible to target ads to that user. Even when a user visits different websites served by different web servers, the ads on those sites may all be served from a single ad server. This way, the server providing the ads is able to track the user.

A cookie can contain any (small) amount of data and will most often contain a string randomly generated by the server. There is thus no personal information stored in the cookie itself. The server may, however, store personal information and user preferences if the user types these in on the website. The cookie then allows the server to associate that stored information with the user: each time the user returns to the website, the browser sends the cookie back and the server looks up the matching record.
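As a rough sketch of that association, assuming an in-memory store and hypothetical names, the server side might look like this, with the cookie carrying nothing but the random token:

```python
import secrets

# Hypothetical in-memory session store: the cookie holds only a random token,
# while the personal data the user typed in stays on the server.
sessions = {}

def start_session(user_preferences):
    token = secrets.token_hex(16)          # random string placed in the cookie
    sessions[token] = user_preferences     # personal data stays server-side
    return token

def lookup_session(token_from_cookie):
    # On each request the browser returns the token and the server
    # re-associates it with the stored preferences.
    return sessions.get(token_from_cookie)

token = start_session({"name": "Alice", "currency": "USD"})
print(lookup_session(token))               # {'name': 'Alice', 'currency': 'USD'}
```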

Most browsers allow the user to decide whether to accept a cookie from a web server. If the user declines, any functionality on the website that relies on cookies is disabled. If a website has implemented an electronic shopping cart using cookies, it will thus not be possible for the user to make a purchase without accepting the cookie.

Cookies may have an expiration date, in which case the browser will stop sending the cookie to the server once it has expired. Some cookies are defined as non-persistent, in which case they are deleted when the browser is closed. Users may also manually delete all or selected cookies.

While the data in the cookie itself is not personal, and a server can only acquire personal information if the user explicitly discloses it, cookies are seen as a cause for concern over data privacy. The main reason for this is the tracking of user behavior over multiple websites. For this reason, cookies have been subject to legislation in the United States, the United Kingdom and other countries.

There are other areas of concern. If multiple users share the same computer, user profile and browser, they will appear as one user to the web server. Cookies may also be stolen or tampered with, or an attacker may eavesdrop on the connection between server and user and thereby snoop the cookie.

There are alternatives to using cookies, each with its own drawbacks. One alternative involves tracking the user by the IP address from which the server receives the request for a webpage. This is inaccurate, as multiple users may share the same IP address or proxy server. Other alternatives include HTTP authentication and embedding information into URLs.
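For instance, the URL-embedding alternative might look like the following sketch (the parameter name sid and the example URL are purely illustrative); its drawback is that the identifier is lost as soon as the user follows a link that does not carry it, and it is exposed whenever the URL is shared:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Cookie-free alternative: carry the session ID in every URL instead.
session_id = "a3f9c2e1"
link = "http://example.com/cart?" + urlencode({"sid": session_id})
print(link)                               # http://example.com/cart?sid=a3f9c2e1

# The server recovers the ID from the request URL on the next click.
params = parse_qs(urlparse(link).query)
print(params["sid"][0])                   # a3f9c2e1
```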

In this article we described what cookies are and what they may be used for. We have seen why they are a cause for concern over data privacy and we have touched on the subject of alternatives.

About This Author

Andrew Nielsen is a consultant and internet veteran who has spent many years helping internet companies become more profitable. Andrew also helps business start-ups and individuals make money online. Visit http://www.i-want-to-be-rich.com/

Friday, January 12, 2007

Latent Semantic Indexing -- The Basic Understanding

Talk of Latent Semantic Indexing, or LSI, has been cropping up a lot in the last several weeks. I don't know about you, but those who write about it haven't really helped me much in learning the concept Google is now emphasizing in its algorithm mix. So I did some research, and I believe I now have a better handle on LSI. Here is my attempt at explaining it -- my article "Understanding Latent Semantic Indexing."

Now, I didn't include everything -- I didn't want to bore you with how LSI prunes ambiguous words out of the mix before applying its mathematical matrix and assigning rankings to words and phrases. Instead, I just gave you a flavor of what LSI is and how you might adjust to the changes.

Even though LSI is a mathematical solution, if it works as claimed it will give better search results. One thing to note: with the mathematical matrix used in ranking keywords, it will be a little more difficult to know which keywords or keyword phrases will rank high and which will not -- which, I might add, may be Google's purpose (wanting quality to prevail while eliminating, or at least diminishing, SEO black hat tactics).



Understanding Latent Semantic Indexing
By Vickie J. Scanlon


There has been much talk lately of Latent Semantic Indexing -- due in part to Google placing a higher relevancy on it in its algorithms, and consequently the hits and lower page rankings some webmasters encountered when Google instituted the change. If you are utilizing SEO, or want to, learning about LSI is important. What is Latent Semantic Indexing, and how can it help or hurt your site? These are the questions I will attempt to address in this article.


What is Latent Semantic Indexing?

Latent Semantic Indexing has been around for a while. According to Wikipedia, LSI was first patented in 1988. The LSI concept attempts to convert information from computer databases into normal-sounding human language. Understand? Yeah, my thought exactly. Let me compare and contrast, and maybe the idea will become a little clearer.

With the old system of keyword search, the search engine would go through your web page and grab the keywords or keyword phrases that were relevant. If no relevant keywords were present, the information on the page would be tossed aside and not considered relevant -- no in-betweens -- and the search engine algorithm would rank the page accordingly.

With LSI, an important step was added to the search engine algorithm: the examination of the page as a whole, taking into account the many words that are semantically close to the keywords. Thus we now have not only keyword and keyword phrase searches, but also the added mix of words that are semantically close to those keywords and keyword phrases. Surprisingly, this is how a human being looks at content, mentally classifying a web page or document as a whole.

To put it all together, Latent Semantic Indexing allows a search engine to determine what a page is about without relying heavily on keywords alone to draw the searcher to a web page. Keywords will still be relevant; the difference lies in how the search engine puts it all together. The search engine will not only analyze the keywords on a web page, but will also consider the relevancy of those keywords, as well as the words that are semantically close to the keywords and to the general theme of the page.

For example, on a web page about insurance, "health", "auto" and "life" would be words related to the insurance theme.
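To give a flavor of the math without the details, here is a toy sketch of the idea using a truncated singular value decomposition on a tiny term-document matrix. The terms and counts are invented purely for illustration, and this is of course not Google's actual implementation.

```python
import numpy as np

# A toy term-document matrix: rows are terms, columns are pages.
# (Counts are made up purely for illustration.)
terms = ["insurance", "health", "auto", "life", "recipe"]
A = np.array([
    [4, 3, 0],   # insurance
    [2, 1, 0],   # health
    [1, 2, 0],   # auto
    [2, 0, 0],   # life
    [0, 0, 5],   # recipe
], dtype=float)

# LSI reduces the matrix with a truncated singular value decomposition,
# keeping only the strongest k "concepts".
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
docs_in_concept_space = (np.diag(s[:k]) @ Vt[:k]).T

# Pages 1 and 2 land close together in concept space (both are "about
# insurance") while page 3 stays far away, even though no single keyword
# is compared directly.
print(np.round(docs_in_concept_space, 2))
```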

Benefits and Drawbacks of Latent Semantic Indexing

With the change to LSI, some will see benefits, while others will have a hair-pulling awakening and a possible slip in page ranking.

Drawbacks

* If you rely heavily on only one keyword, without any variation and without additional words that relate directly to the theme of your page, you may see a drop in ranking.


Benefits

* With the increased relevancy and weight put on Latent Semantic Indexing, it can help curtail "SEO black hat tactics". With the old weighting system Google used, people could manipulate the search engines and obtain a rather decent page rank. With LSI, they will be hard pressed to determine which words or phrases Google will give higher or lower relevancy in relation to the theme of the page.

* If you have developed web pages filled with natural content, with keywords and keyword alternatives intermixed, you may not see much of a change. You may not consider that a benefit -- but it certainly spares you the headache and sleepless nights you might otherwise feel.


With the changes to LSI, I feel the emphasis for any webmaster will have to include:

* Quality content
* Keywords – keyword alternatives
* Mixed anchor text -- not based on just one keyword, but relevant to the content/theme of the web page.
* Variations of your keyword or keywords and keyword phrases -- plural, singular or different tenses.
* Words that relate to the theme of the page


To conclude, Google's introduction of LSI to its search engine is, in my opinion, an attempt to move the bar a little higher for quality content -- not only emphasizing the most relevant information, but also the most useful information for its searchers. Where does that leave the webmaster? As always on the Internet -- expect change, and be ready to adapt and adjust when needed. And the other search engines? They may follow soon.


About the Author:
Vickie J Scanlon -- Visit her site at myaffiliateplace.biz for tools, ebooks, "how to" affiliate/internet info, tech accessories, software and computers for the affiliate/small business person online.

Friday, January 5, 2007

Exam in 30 minutes...

Uh, taking a moment to jot down a few lines here...

The situation is that for the past few days my head has been aching constantly; I can't get my studying done and the exams aren't going well either, but no matter, I will try my best! In 30 minutes I have the exam for the second subject, and yet there is still a bit of important theory I haven't studied. I've been sitting here trying to cram it in, but nothing sticks, sigh... I hope the professor doesn't put that part on the exam, because right now my head hurts too much to take in even one more bit of knowledge.

All right, off to the exam! Wish me luck!



Reflections and Predictions of 2006 and 2007

The new year is here. Throughout the past year you have seen changes -- from Google AdSense and AdWords changes, to social bookmarking and videocasting -- and along with that, all the legal issues. Sometimes changes on the Internet fly by so fast that you don't have time to absorb what has taken place, or what is taking place. Thus, it's good to reflect -- not just because it is a new year, but also to make sure that you and your business are in tune with the Internet climate and are not being left behind.

Below is an article by Sharon Housley, "Reflections of 2006 and Predictions for 2007," which reflects on the state of the world before venturing to predict what might and might not be successful on the Internet in the new year.

As to her reflections of the world in general for 2006 -- sadly, I have to agree.

As to the predictions -- you can agree or disagree. And if you've been a little out of touch with what went on last year on the Internet, it will give you a chance to catch up -- and to see where you stand in the Internet world.


Reflections of 2006 and Predictions for 2007
By Sharon Housley

For the most part in 2006 the world escaped Nature's wrath, but people were far less kind to their neighbors. 2006 is scarred not by the winds and oceans but by political turmoil across the globe. The Middle East quagmire is the epitome of how wrong things can go: with the war in Lebanon, infighting in Palestine, Iran's nuclear ambitions, and Iraq's sectarian violence, the deepest scars of 2006 were self-inflicted, with man being his own enemy. Of course, the Middle East is not alone in its self-destruction. Genocide in Chad and Sudan shows how truly intolerant the human race really is. North Korea's impatience and nuclear activity have disrupted Asia. In fact, few areas of the world were left unscathed by man's ambitions in 2006.

Again technology has brought the tragedies of war and the personal stories of families from the farthest corners of the earth to the doorsteps of the West. Citizen journalism and Internet propagation have added a complex layer to the stories. The growth of YouTube, blogs, podcasting and RSS has personalized the media and given listeners and watchers a personal connection to the reports.

Technology has not only revolutionized the news and how it is viewed; interactive technology is now shaping the news itself. Wikipedia, while still a powerhouse in the search engines, has a tarnished reputation due to relevancy issues. While persistence pays off for some, there are hints that not all are equal in the most popular social wiki.

Looking back on last year's predictions (http://www.small-business-software.net/2005-in-review.htm), sadly I see little has changed in the online world of spam and splogs. As feared, social networks and social bookmarking seem to be the next staging ground for spammers. We are already beginning to see the cracks in the ever-popular Digg. The collective voice, while powerful, can be manipulated, bringing into question the usefulness of user-generated content. As a result, there is a strong indication that web credibility will continue to be an issue in 2007.

Transparency will likely continue to be an issue in 2007, with a lack of legislation and no accountability for online journalistic integrity. Readers should not believe everything they read. Traditional media will continue to struggle; creativity will prevail. Newspapers and traditional media will need to adapt in order to survive in 2007. We will likely see interesting new advertising models emerge in 2007, with video ads and sponsored podcasts taking hold as big media attempt to monetize these new communication mediums.

The world of online advertising saw some significant changes in 2006. Google tightened its grip on publishers, enforcing strict new rules for displaying ads. While ad relevancy was critical in 2005, website quality became part of the formula in 2006. Google's change of heart and fall from grace with publishers encouraged new advertising models, with two new services, PayPerPost and ReviewMe, emerging. The new pay-per-post models match bloggers with advertisers: bloggers, or online writers, are paid to review and write about advertiser projects. Like all new mediums, the road was not free of bumps, and there were some transparency issues. Both services now require that bloggers or writers disclose that they are being paid for their comments. This new model will likely be a winner in 2007.

As the web becomes more cluttered, it is obvious that personalized content will continue to grow, but filtering will play an even more important role. RSS feeds and user-selected content will become more mainstream, with more and more users opting to choose the content they receive. Companies hoping to stay competitive online and increase communication with potential customers will start to really understand the benefits behind RSS.

Venture capital money returned to the web in 2006, and the 2.0 bubble continued to grow. While there will continue to be mergers and acquisitions among 2.0 companies in 2007, the activity will likely slow. The courts will likely become crowded in 2007: with big players like Google housing content on YouTube that is in clear violation of Western copyright laws, victims will attempt to parlay the copyright infringements into cash in 2007.

Top 10 Winners Predicted for 2007

1. Content Filtering - Search 2.0 will be all about filtering
2. Personalized Search and Vertical Search will be a winner in 2007
3. Social Networks
4. RSS
5. iPod / iPhone / Video iPod / iTunes
6. Cyber Security
7. Going Green
8. PodSafe Music
9. Videocasting
10. Online Real Estate

Honorable Mentions

1. Web Services (Software as a Service)
2. Mobile Web
3. International Web
4. Local Web
5. Podcast Quality
6. Video Advertising


Top 10 Losers Predicted for 2007

1. Zune
2. Software Patents
3. Websites that Infringe on Copyrights
4. Video Conferencing
5. Social Wikis
6. Journalistic Accountability
7. YouTube in Court
8. Outsourcing
9. Personal Privacy
10. Web Legislation

More on 2007 Predictions - http://www.small-business-software.net/whats-hot-whats-not.htm
--------------------------------------------------------------------------------
About the Author:
Sharon Housley manages marketing for FeedForAll (http://www.feedforall.com), software for creating, editing and publishing RSS feeds and podcasts. In addition, Sharon manages marketing for NotePage (http://www.notepage.net), a wireless text messaging software company.

----------------

Check out Podcasting Overview and Mechanics of Podcasting on myaffiliateplace.biz