Tuesday, 4 August 2015

SEO (Search engine optimization)

Search engine optimization (SEO) is the process of affecting the visibility of a website or a web page in a search engine's unpaid results - often referred to as "natural," "organic," or "earned" results. In general, the earlier (or higher ranked on the search results page), and more frequently a site appears in the search results list, the more visitors it will receive from the search engine's users. SEO may target different kinds of search, including image search, local search, video search, academic search,[1] news search and industry-specific vertical search engines.
As an Internet marketing strategy, SEO considers how search engines work, what people search for, the actual search terms or keywords typed into search engines and which search engines are preferred by their targeted audience. Optimizing a website may involve editing its content, HTML and associated coding to both increase its relevance to specific keywords and to remove barriers to the indexing activities of search engines. Promoting a site to increase the number of backlinks, or inbound links, is another SEO tactic.

Contents

  • 1 History
    • 1.1 Relationship with Google
  • 2 Methods
    • 2.1 Getting indexed
    • 2.2 Preventing crawling
    • 2.3 Increasing prominence
    • 2.4 White hat versus black hat techniques
  • 3 As a marketing strategy
  • 4 International markets

History

Webmasters and content providers began optimizing sites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all webmasters needed to do was to submit the address of a page, or URL, to the various engines which would send a "spider" to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed.[2] The process involves a search engine spider downloading a page and storing it on the search engine's own server, where a second program, known as an indexer, extracts various information about the page, such as the words it contains and where these are located, as well as any weight for specific words, and all links the page contains, which are then placed into a scheduler for crawling at a later date.
Site owners started to recognize the value of having their sites highly ranked and visible in search engine results, creating an opportunity for both white hat and black hat SEO practitioners. According to industry analyst Danny Sullivan, the phrase "search engine optimization" probably came into use in 1997. Sullivan credits Bruce Clay as being one of the first people to popularize the term.[3] On May 2, 2007,[4] Jason Gambert attempted to trademark the term SEO by convincing the Trademark Office in Arizona[5] that SEO is a "process" involving manipulation of keywords, and not a "marketing service."
Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag, or index files in engines like ALIWEB. Meta tags provide a guide to each page's content. Using meta data to index pages was found to be less than reliable, however, because the webmaster's choice of keywords in the meta tag could potentially be an inaccurate representation of the site's actual content. Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches.[6][dubious ] Web content providers also manipulated a number of attributes within the HTML source of a page in an attempt to rank well in search engines.[7]
By relying so much on factors such as keyword density which were exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure their results pages showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters. Since the success and popularity of a search engine is determined by its ability to produce the most relevant results to any given search, poor quality or irrelevant search results could lead users to find other search sources. Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate.
By 1997, search engine designers recognized that webmasters were making efforts to rank well in their search engines, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords. Early search engines, such as AltaVista and Infoseek, adjusted their algorithms in an effort to prevent webmasters from manipulating rankings.[8]
In 2005, an annual conference, AIRWeb (Adversarial Information Retrieval on the Web), was created to bring together practitioners and researchers concerned with search engine optimization and related topics.[9]
Companies that employ overly aggressive techniques can get their client websites banned from the search results. In 2005, the Wall Street Journal reported on a company, Traffic Power, which allegedly used high-risk techniques and failed to disclose those risks to its clients.[10] Wired magazine reported that the same company sued blogger and SEO Aaron Wall for writing about the ban.[11] Google's Matt Cutts later confirmed that Google did in fact ban Traffic Power and some of its clients.[12]
Some search engines have also reached out to the SEO industry, and are frequent sponsors and guests at SEO conferences, chats, and seminars. Major search engines provide information and guidelines to help with site optimization.[13][14] Google has a Sitemaps program to help webmasters learn if Google is having any problems indexing their website and also provides data on Google traffic to the website.[15] Bing Webmaster Tools provides a way for webmasters to submit a sitemap and web feeds, allows users to determine the crawl rate, and tracks the web pages' index status.

Relationship with Google

In 1998, two graduate students at Stanford University, Larry Page and Sergey Brin, developed "Backrub," a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links.[16] PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web and follows links from one page to another. In effect, this means that some links are stronger than others, as a higher-PageRank page is more likely to be reached by the random surfer.
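The random-surfer idea can be illustrated with a short power-iteration sketch in Python, using a made-up four-page link graph. This is only an illustration of the concept, not Google's actual implementation:

    # Toy PageRank via power iteration: rank flows along links, and a page
    # linked to by many (or highly ranked) pages accumulates a higher score.
    def pagerank(links, damping=0.85, iterations=50):
        """links maps each page to the list of pages it links to."""
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1 - damping) / len(pages) for p in pages}
            for page, outlinks in links.items():
                targets = outlinks or pages  # a dangling page spreads its rank evenly
                for target in targets:
                    new_rank[target] += damping * rank[page] / len(targets)
            rank = new_rank
        return rank

    # "B" is linked to by every other page, so it ends up with the highest score.
    graph = {"A": ["B"], "C": ["B"], "D": ["A", "B"], "B": ["C"]}
    print(pagerank(graph))

In this toy graph, B's score is highest, and C benefits indirectly because its only inbound link comes from the highly ranked B.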
Page and Brin founded Google in 1998.[17] Google attracted a loyal following among the growing number of Internet users, who liked its simple design.[18] Off-page factors (such as PageRank and hyperlink analysis) were considered as well as on-page factors (such as keyword frequency, meta tags, headings, links and site structure) to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings. Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaming PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale. Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming.[19]
By 2004, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation. In June 2007, The New York Times' Saul Hansell stated Google ranks sites using more than 200 different signals.[20] The leading search engines, Google, Bing, and Yahoo, do not disclose the algorithms they use to rank pages. Some SEO practitioners have studied different approaches to search engine optimization, and have shared their personal opinions.[21] Patents related to search engines can provide information to better understand search engines.[22]
In 2005, Google began personalizing search results for each user. Depending on their history of previous searches, Google crafted results for logged in users.[23] In 2008, Bruce Clay said that "ranking is dead" because of personalized search. He opined that it would become meaningless to discuss how a website ranked, because its rank would potentially be different for each user and each search.[24]
In 2007, Google announced a campaign against paid links that transfer PageRank.[25] On June 15, 2009, Google disclosed that it had taken measures to mitigate the effects of PageRank sculpting by use of the nofollow attribute on links. Matt Cutts, a well-known software engineer at Google, announced that Googlebot would no longer treat nofollowed links in the same way, in order to prevent SEO service providers from using nofollow for PageRank sculpting.[26] As a result of this change, the use of nofollow led to the evaporation of PageRank. To avoid this, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated JavaScript and thus permit PageRank sculpting. Additionally, several solutions have been suggested that include the use of iframes, Flash and JavaScript.[27]
In December 2009, Google announced it would be using the web search history of all its users in order to populate search results.[28]
On June 8, 2010, a new web indexing system called Google Caffeine was announced. Designed to allow users to find news results, forum posts and other content much sooner after publishing than before, Google Caffeine was a change to the way Google updated its index, intended to make new content show up on Google more quickly than before. According to Carrie Grimes, the software engineer who announced Caffeine for Google, "Caffeine provides 50 percent fresher results for web searches than our last index..."[29]
Google Instant, real-time search, was introduced in late 2010 in an attempt to make search results more timely and relevant. Historically, site administrators have spent months or even years optimizing a website to increase search rankings. With the growth in popularity of social media sites and blogs, the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.[30]
In February 2011, Google announced the Panda update, which penalizes websites containing content duplicated from other websites and sources. Historically, websites have copied content from one another and benefited in search engine rankings by engaging in this practice; however, Google implemented a new system that punishes sites whose content is not unique.[31] The 2012 Google Penguin update attempted to penalize websites that used manipulative techniques to improve their rankings on the search engine,[32] and the 2013 Google Hummingbird update featured an algorithm change designed to improve Google's natural language processing and semantic understanding of web pages.

Methods

Getting indexed


Search engines use complex mathematical algorithms to guess which websites a user seeks. In this diagram, if each bubble represents a website, programs sometimes called spiders examine which sites link to which other sites, with arrows representing these links. Websites that receive more inbound links, or stronger links, are presumed to be more important and more likely to be what the user is searching for. In this example, since website B is the recipient of numerous inbound links, it ranks more highly in a web search. And the links "carry through," such that website C, even though it only has one inbound link, has an inbound link from a highly popular site (B), while site E does not. Note: percentages are rounded.
The leading search engines, such as Google, Bing and Yahoo!, use crawlers to find pages for their algorithmic search results. Pages that are linked from other search-engine-indexed pages do not need to be submitted because they are found automatically. Two major directories, the Yahoo! Directory and DMOZ, both require manual submission and human editorial review.[33] Google offers Google Webmaster Tools, for which an XML Sitemap feed can be created and submitted for free to ensure that all pages are found, especially pages that are not discoverable by automatically following links.[34] Yahoo! formerly operated a paid submission service that guaranteed crawling for a cost per click;[35] this was discontinued in 2009.[36]
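As an illustration, a basic Sitemap file can be generated with a few lines of script before being submitted through a webmaster tools account. The URLs below are placeholders, and real sitemaps usually also carry last-modification dates:

    # Minimal XML Sitemap generator using only the Python standard library.
    import xml.etree.ElementTree as ET

    urls = ["https://www.example.com/", "https://www.example.com/about"]

    urlset = ET.Element("urlset",
                        xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url

    ET.ElementTree(urlset).write("sitemap.xml",
                                 encoding="utf-8", xml_declaration=True)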
Search engine crawlers may look at a number of different factors when crawling a site. Not every page is indexed by the search engines. Distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled.[37]

Preventing crawling

Main article: Robots Exclusion Standard
To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots. When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed, and will instruct the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled. Pages typically prevented from being crawled include login specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.[38]
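The robots.txt convention can also be checked programmatically; Python's standard library ships a parser for it. A minimal sketch, with a hypothetical domain and paths:

    # Fetch and interpret a site's robots.txt the way a well-behaved spider would.
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser()
    rp.set_url("https://www.example.com/robots.txt")
    rp.read()   # download and parse the file from the site's root directory

    # Check individual URLs before crawling them.
    print(rp.can_fetch("*", "https://www.example.com/search?q=widgets"))
    print(rp.can_fetch("*", "https://www.example.com/products/widget"))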

Increasing prominence

A variety of methods can increase the prominence of a webpage within the search results. Cross linking between pages of the same website to provide more links to important pages may improve its visibility.[39] Writing content that includes frequently searched keyword phrases, so as to be relevant to a wide variety of search queries, will tend to increase traffic.[39] Updating content so as to keep search engines crawling back frequently can give additional weight to a site. Adding relevant keywords to a web page's metadata, including the title tag and meta description, will tend to improve the relevancy of a site's search listings, thus increasing traffic. URL normalization of web pages accessible via multiple URLs, using the canonical link element[40] or via 301 redirects, can help make sure links to different versions of the URL all count towards the page's link popularity score.
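URL normalization itself can be sketched in a few lines. The rules below (lower-casing the host, dropping a "www." prefix and a trailing "index.html") are only illustrative; in practice the canonical link element or a 301 redirect is what signals the preferred URL to search engines:

    # Collapse several equivalent addresses to one canonical form (Python 3.9+).
    from urllib.parse import urlsplit, urlunsplit

    def normalize(url):
        scheme, netloc, path, query, _fragment = urlsplit(url)
        netloc = netloc.lower().removeprefix("www.")
        if path.endswith("/index.html"):
            path = path[: -len("index.html")]
        return urlunsplit((scheme.lower(), netloc, path or "/", query, ""))

    print(normalize("HTTP://WWW.Example.com/index.html"))   # http://example.com/
    print(normalize("http://example.com/"))                 # http://example.com/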

White hat versus black hat techniques

SEO techniques can be classified into two broad categories: techniques that search engines recommend as part of good design, and those techniques of which search engines do not approve. The search engines attempt to minimize the effect of the latter, among them spamdexing. Industry commentators have classified these methods, and the practitioners who employ them, as either white hat SEO, or black hat SEO.[41] White hats tend to produce results that last a long time, whereas black hats anticipate that their sites may eventually be banned either temporarily or permanently once the search engines discover what they are doing.[42]
An SEO technique is considered white hat if it conforms to the search engines' guidelines and involves no deception. As the search engine guidelines[13][14][43] are not written as a series of rules or commandments, this is an important distinction to note. White hat SEO is not just about following guidelines, but is about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see. White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to the spiders, rather than attempting to trick the algorithm from its intended purpose. White hat SEO is in many ways similar to web development that promotes accessibility,[44] although the two are not identical.
Black hat SEO attempts to improve rankings in ways that are disapproved of by the search engines, or involve deception. One black hat technique uses text that is hidden, either as text colored similar to the background, in an invisible div, or positioned off screen. Another method gives a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known as cloaking.
Another category sometimes used is grey hat SEO. This lies between the black hat and white hat approaches: the methods employed avoid having the site penalised, but they do not focus on producing the best content for users, concentrating instead entirely on improving search engine rankings.
Search engines may penalize sites they discover using black hat methods, either by reducing their rankings or eliminating their listings from their databases altogether. Such penalties can be applied either automatically by the search engines' algorithms, or by a manual site review. One example was the February 2006 Google removal of both BMW Germany and Ricoh Germany for use of deceptive practices.[45] Both companies, however, quickly apologized, fixed the offending pages, and were restored to Google's list.[46]

As a marketing strategy

SEO is not an appropriate strategy for every website, and other Internet marketing strategies, such as paid advertising through pay-per-click (PPC) campaigns, can be more effective, depending on the site operator's goals.[47] A successful Internet marketing campaign may also depend upon building high-quality web pages to engage and persuade, setting up analytics programs to enable site owners to measure results, and improving a site's conversion rate.[48]
SEO may generate an adequate return on investment. However, search engines are not paid for organic search traffic, their algorithms change, and there are no guarantees of continued referrals. Due to this lack of guarantees and certainty, a business that relies heavily on search engine traffic can suffer major losses if the search engines stop sending visitors.[49] Search engines can change their algorithms, impacting a website's placement, possibly resulting in a serious loss of traffic. According to Google's CEO, Eric Schmidt, in 2010, Google made over 500 algorithm changes – almost 1.5 per day.[50] It is considered wise business practice for website operators to liberate themselves from dependence on search engine traffic.[51]

International markets

Optimization techniques are highly tuned to the dominant search engines in the target market. The search engines' market shares vary from market to market, as does competition.

Wednesday, 29 July 2015

How to Troubleshoot Internet Connection Problems


Internet connection problems can be frustrating. Rather than mashing F5 and desperately trying to reload your favorite website when you experience a problem, here are some ways you can troubleshoot the problem and identify the cause.
Ensure you check the physical connections before getting too involved with troubleshooting. Someone could have accidentally kicked the router or modem’s power cable or pulled an Ethernet cable out of a socket, causing the problem.

Ping

One of the first things to try when your connection doesn’t seem to be working properly is the ping command. Open a Command Prompt window from your Start menu and run a command like ping google.com or ping howtogeek.com.
This command sends several packets to the address you specify. The web server responds to each packet it receives. If everything is working fine, you'll see 0% packet loss and fairly low response times for each packet.
If you see packet loss (in other words, if the web server didn’t respond to one or more of the packets you sent), this can indicate a network problem. If the web server sometimes takes a much longer amount of time to respond to some of your other packets, this can also indicate a network problem. This problem can be with the website itself (unlikely if the same problem occurs on multiple websites), with your Internet service provider, or on your network (for example, a problem with your router).
Note that some websites never respond to pings. For example, ping microsoft.com will never result in any responses.
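If you want to run the same check from a script, a short Python sketch like the following works on Linux or macOS (on Windows, replace the "-c" flag with "-n"):

    # Run ping from Python and report whether the host answered.
    import subprocess

    def ping(host, count=4):
        result = subprocess.run(["ping", "-c", str(count), host],
                                capture_output=True, text=True)
        print(result.stdout)           # shows per-packet times and packet loss
        return result.returncode == 0  # 0 means at least one reply came back

    if ping("google.com"):
        print("Host answered - the connection looks alive.")
    else:
        print("No replies - possible network problem.")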

Problems With a Specific Website

If you’re experiencing issues accessing websites and ping seems to be working properly, it’s possible that one (or more) websites are experiencing problems on their end.
To check whether a website is working properly, you can use Down For Everyone Or Just For Me, a tool that tries to connect to websites and determine if they’re actually down or not. If this tool says the website is down for everyone, the problem is on the website’s end.
If this tool says the website is down for just you, that could indicate a number of things. It’s possible that there’s a problem between your computer and the path it takes to get to that website’s servers on the network. You can use the traceroute command (for example, tracert google.com) to trace the route packets take to get to the website’s address and see if there are any problems along the way. However, if there are problems, you can’t do much more than wait for them to be fixed.
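The same trace can be run from a script. The brief sketch below assumes Windows, where the command is tracert (on Linux or macOS, use traceroute instead):

    # Print each hop between this computer and the destination.
    import subprocess

    trace = subprocess.run(["tracert", "google.com"],
                           capture_output=True, text=True)
    print(trace.stdout)   # look for hops that time out or show large delays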

Modem & Router Issues

If you are experiencing problems with a variety of websites, they may be caused by your modem or router. The modem is the device that communicates with your Internet service provider, while the router shares the connection among all the computers and other networked devices in your household. In some cases, the modem and router may be the same device.
Take a look at the router. If green lights are flashing on it, that's normal and indicates network traffic. If you see a steady or blinking orange light, that generally indicates a problem. The same applies to the modem – a blinking orange light usually indicates a problem.
If the lights indicate that either device is experiencing a problem, try unplugging them and plugging them back in. This is just like restarting your computer. You may also want to try this even if the lights are blinking normally – we've experienced flaky routers that occasionally needed to be reset, just like Windows computers. Bear in mind that it may take your modem a few minutes to reconnect to your Internet service provider.
If you still experience problems, you may need to perform a factory reset on your router or upgrade its firmware. To test whether the problem is really with your router or not, you can plug your computer’s Ethernet cable directly into your modem. If the connection now works properly, it’s clear that the router is causing you problems.

Issues With One Computer

If you’re only experiencing network problems on one computer on your network, it’s likely that there’s a software problem with the computer. The problem could be caused by a virus or some sort of malware or an issue with a specific browser.
Do an antivirus scan on the computer and try installing a different browser and accessing that website in the other browser. There are lots of other software problems that could be the cause, including a misconfigured firewall.

DNS Server Problems

When you try to access Google.com, your computer contacts its DNS server and asks for Google.com’s IP address. The default DNS servers your network uses are provided by your Internet service provider, and they may sometimes experience problems.
You can try accessing a website at its IP address directly, which bypasses the DNS server. For example, plug this address into your web browser’s address bar to visit Google directly:
http://74.125.224.72
If the IP address method works but you still can't access google.com, it's a problem with your DNS servers. Rather than wait for your Internet service provider to fix the problem, you can try using a third-party DNS server like OpenDNS or Google Public DNS.
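You can reproduce the lookup your browser performs with a couple of lines of Python. If this lookup fails while browsing by IP address still works, the DNS server (not the connection itself) is the likely culprit:

    # Ask the configured DNS server for google.com's address.
    import socket

    try:
        print("google.com resolves to", socket.gethostbyname("google.com"))
    except socket.gaierror as err:
        print("DNS lookup failed:", err)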

Wednesday, 15 July 2015

DCN networking

What is DCN?

A dynamic circuit network (DCN) is an advanced computer networking technology that combines traditional packet-switched communication based on the Internet Protocol, as used in the Internet, with circuit-switched technologies that are characteristic of traditional telephone network systems. This combination allows user-initiated ad hoc dedicated allocation of network bandwidth for high-demand, real-time applications and network services, delivered over an optical fiber infrastructure.

Wednesday, 13 May 2015

DISADVANTAGES OF COMPUTER

The use of computers has also created some problems in society, which are as follows.

  • Unemployment
  • Wastage of time and energy: Many people use computers without a positive purpose. They play games and chat for long periods of time, which wastes time and energy. The young generation now spends more time on social media websites like Facebook and Twitter, or texting their friends all night on smartphones, which is bad for both their studies and their health, and it also has adverse effects on their social life.
  • Data security: The data stored on a computer can be accessed by unauthorized persons through networks. This has created serious problems for data security.
  • Computer crimes: People use the computer for negative activities. They hack people's credit card numbers and misuse them, or they can steal important data from big organizations.
  • Privacy violation: Computers are used to store people's personal data. A person's privacy can be violated if personal and confidential records are not protected properly.
  • Health risks: Improper and prolonged use of a computer can result in injuries or disorders of the hands, wrists, elbows, eyes, neck and back. Users can avoid health risks by using the computer in a proper position. They must also take regular breaks when using the computer for long periods; a couple of minutes' break after every 30 minutes of computer usage is recommended.

ADVANTAGES OF COMPUTER

The computer has made a very vital impact on society. It has changed the way of life. The use of computer technology has affected every field of life. People are using computers to perform different tasks quickly and easily. The use of computers makes different tasks easier. It also saves time and effort and reduces the overall cost to complete a particular task. Many organizations are using computers for keeping the records of their customers. Banks are using computers for maintaining accounts and managing financial transactions. The banks are also providing the facility of online banking. Customers can check their account balance using the internet. They can also make financial transactions online. The transactions are handled easily and quickly with computerized systems.

Monday, 11 May 2015

CCNP Master

For network engineers who aspire to plan, implement, verify and troubleshoot local and wide-area enterprise networks, the Cisco CCNP Routing and Switching certification program provides the education and training required to develop hands-on skills and best practices.

Friday, 1 May 2015

CCNA


Cisco Certified Network Associate (CCNA) Routing and Switching is a certification program for entry-level network engineers that helps maximize your investment in foundational networking knowledge and increase the value of your employer's network. CCNA Routing and Switching is for Network Specialists, Network Administrators, and Network Support Engineers with 1-3 years of experience. The CCNA Routing and Switching certification validates the ability to install, configure, operate, and troubleshoot medium-size routed and switched networks.
