Computer Courses in Chandigarh: we provide all computer courses in Chandigarh and Mohali. Contact us for more information.

<h2>SEO (Search Engine Optimization)</h2>
<div dir="ltr" style="text-align: left;" trbidi="on">
<b>Search engine optimization</b> (<b>SEO</b>) is the process of affecting the visibility of a website or a web page in a search engine's unpaid results - often referred to as "natural," "organic,"
or "earned" results. In general, the earlier (or higher ranked on the
search results page), and more frequently a site appears in the search
results list, the more visitors it will receive from the search engine's
users. SEO may target different kinds of search, including image search, local search, video search, academic search,<sup class="reference" id="cite_ref-aseo_1-0">[1]</sup> news search and industry-specific vertical search engines.<br />
As an Internet marketing
strategy, SEO considers how search engines work, what people search
for, the actual search terms or keywords typed into search engines and
which search engines are preferred by their targeted audience.
Optimizing a website may involve editing its content, HTML and associated coding to both increase its relevance to specific keywords and to remove barriers to the indexing activities of search engines. Promoting a site to increase the number of backlinks, or inbound links, is another SEO tactic.<br />
<div class="toc" id="toc">
<div id="toctitle">
<h2>
Contents</h2>
</div>
</div>
<div class="toc" id="toc">
<ul>
<li class="toclevel-1 tocsection-1"><span class="tocnumber">1</span> <span class="toctext">History</span>
<ul>
<li class="toclevel-2 tocsection-2"><span class="tocnumber">1.1</span> <span class="toctext">Relationship with Google</span></li>
</ul>
</li>
<li class="toclevel-1 tocsection-3"><span class="tocnumber">2</span> <span class="toctext">Methods</span>
<ul>
<li class="toclevel-2 tocsection-4"><span class="tocnumber">2.1</span> <span class="toctext">Getting indexed</span></li>
<li class="toclevel-2 tocsection-5"><span class="tocnumber">2.2</span> <span class="toctext">Preventing crawling</span></li>
<li class="toclevel-2 tocsection-6"><span class="tocnumber">2.3</span> <span class="toctext">Increasing prominence</span></li>
<li class="toclevel-2 tocsection-7"><span class="tocnumber">2.4</span> <span class="toctext">White hat versus black hat techniques</span></li>
</ul>
</li>
<li class="toclevel-1 tocsection-8"><span class="tocnumber">3</span> <span class="toctext">As a marketing strategy</span></li>
<li class="toclevel-1 tocsection-9"><span class="tocnumber">4</span> <span class="toctext">International markets</span></li>
<li class="toclevel-1 tocsection-10"><span class="tocnumber">5</span> <span class="toctext">Legal precedents</span></li>
<li class="toclevel-1 tocsection-11"><span class="tocnumber">6</span> <span class="toctext">See also</span></li>
<li class="toclevel-1 tocsection-12"><span class="tocnumber">7</span> <span class="toctext">Notes</span></li>
<li class="toclevel-1 tocsection-13"><span class="tocnumber">8</span> <span class="toctext">External links</span></li>
</ul>
</div>
<h2>History</h2>
Webmasters and content providers began optimizing sites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all webmasters needed to do was submit the address of a page, or URL, to the various engines, which would send a "spider" to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed.<sup>[2]</sup> The process involves a search engine spider downloading a page and storing it on the search engine's own server, where a second program, known as an indexer, extracts various information about the page, such as the words it contains, where these are located, and any weight for specific words, as well as all links the page contains. These links are then placed into a scheduler for crawling at a later date.<br />
Site owners started to recognize the value of having their sites highly ranked and visible in search engine results, creating an opportunity for both white hat and black hat SEO practitioners. According to industry analyst Danny Sullivan, the phrase "search engine optimization" probably came into use in 1997. Sullivan credits Bruce Clay as being one of the first people to popularize the term.<sup>[3]</sup> On May 2, 2007,<sup>[4]</sup> Jason Gambert attempted to trademark the term SEO by convincing the Trademark Office in Arizona<sup>[5]</sup> that SEO is a "process" involving manipulation of keywords and not a "marketing service."<br />
Early versions of search algorithms relied on webmaster-provided information such as the keyword <a href="https://en.wikipedia.org/wiki/Meta_tag" title="Meta tag">meta tag</a>, or index files in engines like ALIWEB. Meta tags provide a guide to each page's content. Using meta data to index pages was found to be less than reliable, however, because the webmaster's choice of keywords in the meta tag could potentially be an inaccurate representation of the site's actual content. Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches.<sup>[6]</sup> Web content providers also manipulated a number of attributes within the HTML source of a page in an attempt to rank well in search engines.<sup>[7]</sup><br />
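As a purely illustrative sketch (the keyword and description values below are made up, not taken from any real site), the kind of metadata that early engines read from a page's HTML looked roughly like this:<br />
<pre>
<head>
  <!-- Old-style keyword and description meta tags. Early engines used these to
       index the page; the keywords tag is now largely ignored because it was so
       easy to stuff with misleading terms. -->
  <meta name="keywords" content="computer courses, seo training, chandigarh">
  <meta name="description" content="A short, human-readable summary of the page.">
</head>
</pre>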
By relying so much on factors such as <a href="https://en.wikipedia.org/wiki/Keyword_density" title="Keyword density">keyword density</a>
which were exclusively within a webmaster's control, early search
engines suffered from abuse and ranking manipulation. To provide better
results to their users, search engines had to adapt to ensure their results pages
showed the most relevant search results, rather than unrelated pages
stuffed with numerous keywords by unscrupulous webmasters. Since the
success and popularity of a search engine is determined by its ability
to produce the most relevant results to any given search, poor quality
or irrelevant search results could lead users to find other search
sources. Search engines responded by developing more complex ranking
algorithms, taking into account additional factors that were more
difficult for webmasters to manipulate.<br />
By 1997, search engine designers recognized that <a href="https://en.wikipedia.org/wiki/Webmaster" title="Webmaster">webmasters</a> were making efforts to rank well in their search engines, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords. Early search engines, such as Altavista and Infoseek, adjusted their algorithms in an effort to prevent webmasters from manipulating rankings.<sup class="reference" id="cite_ref-infoseeknyt_8-0">[8]</sup><br />
In 2005, an annual conference, AIRWeb (Adversarial Information Retrieval on the Web), was created to bring together practitioners and researchers concerned with search engine optimization and related topics.<sup>[9]</sup><br />
Companies that employ overly aggressive techniques can get their client websites banned from the search results. In 2005, the Wall Street Journal reported on a company, Traffic Power, which allegedly used high-risk techniques and failed to disclose those risks to its clients.<sup class="reference" id="cite_ref-10">[10]</sup> Wired magazine reported that the same company sued blogger and SEO Aaron Wall for writing about the ban.<sup class="reference" id="cite_ref-wired09082005_11-0">[11]</sup> Google's Matt Cutts later confirmed that Google did in fact ban Traffic Power and some of its clients.<sup class="reference" id="cite_ref-12">[12]</sup><br />
Some search engines have also reached out to the SEO industry, and
are frequent sponsors and guests at SEO conferences, chats, and
seminars. Major search engines provide information and guidelines to
help with site optimization.<sup class="reference" id="cite_ref-g-wmguide_13-0">[13]</sup><sup class="reference" id="cite_ref-ms-wmguide_14-0">[14]</sup> Google has a Sitemaps
program to help webmasters learn if Google is having any problems
indexing their website and also provides data on Google traffic to the
website.<sup>[15]</sup> Bing Webmaster Tools provides a way for webmasters to submit a sitemap and web feeds, allows users to determine the crawl rate, and tracks the web pages' index status.<br />
<h3>Relationship with Google</h3>
In 1998, graduate students at Stanford University, Larry Page and Sergey Brin, developed "Backrub," a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links.<sup>[16]</sup> PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web and follows links from one page to another. In effect, this means that some links are stronger than others, as a higher PageRank page is more likely to be reached by the random surfer.<br />
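As a rough sketch of the idea (one common published formulation, not Google's exact production formula), the PageRank of a page A can be written as:<br />
<pre>
PR(A) = (1 - d)/N  +  d * ( PR(T1)/C(T1) + ... + PR(Tn)/C(Tn) )
</pre>
where T1...Tn are the pages that link to A, C(T) is the number of outbound links on page T, N is the total number of pages, and d is a damping factor (commonly taken as about 0.85) representing the chance that the random surfer keeps following links instead of jumping to a random page. A link from a high-PageRank page with few outbound links therefore passes more weight than a link from a low-PageRank page with many.<br />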
Page and Brin founded Google in 1998.<sup class="reference" id="cite_ref-17">[17]</sup> Google attracted a loyal following among the growing number of Internet users, who liked its simple design.<sup class="reference" id="cite_ref-bbc-1_18-0">[18]</sup>
Off-page factors (such as PageRank and hyperlink analysis) were
considered as well as on-page factors (such as keyword frequency, <a class="mw-redirect" href="https://en.wikipedia.org/wiki/Meta_tags" title="Meta tags">meta tags</a>,
headings, links and site structure) to enable Google to avoid the kind
of manipulation seen in search engines that only considered on-page
factors for their rankings. Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the Inktomi
search engine, and these methods proved similarly applicable to gaming
PageRank. Many sites focused on exchanging, buying, and selling links,
often on a massive scale. Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming.<sup class="reference" id="cite_ref-19">[19]</sup><br />
By 2004, search engines had incorporated a wide range of undisclosed
factors in their ranking algorithms to reduce the impact of link
manipulation. In June 2007, <i>The New York Times'</i> Saul Hansell stated Google ranks sites using more than 200 different signals.<sup class="reference" id="cite_ref-nyt0607_20-0">[20]</sup> The leading search engines, Google, Bing, and Yahoo,
do not disclose the algorithms they use to rank pages. Some SEO
practitioners have studied different approaches to search engine
optimization, and have shared their personal opinions.<sup class="reference" id="cite_ref-21">[21]</sup> Patents related to search engines can provide information to better understand search engines.<sup class="reference" id="cite_ref-22">[22]</sup><br />
In 2005, Google began personalizing search results for each user.
Depending on their history of previous searches, Google crafted results
for logged in users.<sup class="reference" id="cite_ref-23">[23]</sup> In 2008, Bruce Clay said that "ranking is dead" because of personalized search.
He opined that it would become meaningless to discuss how a website
ranked, because its rank would potentially be different for each user
and each search.<sup class="reference" id="cite_ref-24">[24]</sup><br />
In 2007, Google announced a campaign against paid links that transfer PageRank.<sup class="reference" id="cite_ref-25">[25]</sup> On June 15, 2009, Google disclosed that they had taken measures to mitigate the effects of PageRank sculpting by use of the nofollow attribute on links. Matt Cutts,
a well-known software engineer at Google, announced that Google Bot
would no longer treat nofollowed links in the same way, in order to
prevent SEO service providers from using nofollow for PageRank
sculpting.<sup class="reference" id="cite_ref-26">[26]</sup>
As a result of this change, the use of nofollow leads to the evaporation of PageRank. To avoid this, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated JavaScript and thus permit PageRank sculpting. Additionally, several solutions have been suggested that include the use of iframes, Flash, and JavaScript.<sup>[27]</sup><br />
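For illustration, nofollow is applied per link in the page's HTML (the URL below is a placeholder):<br />
<pre>
<!-- A nofollowed link: a hint to crawlers not to pass PageRank through it -->
<a href="http://www.example.com/some-page" rel="nofollow">Example link</a>
</pre>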
In December 2009, Google announced it would be using the web search
history of all its users in order to populate search results.<sup class="reference" id="cite_ref-28">[28]</sup><br />
On June 8, 2010, a new web indexing system called Google Caffeine was announced. Designed to allow users to find news results, forum posts, and other content much sooner after publishing than before, Caffeine was a change to the way Google updated its index in order to make things show up on Google more quickly than before. According to Carrie Grimes, the software engineer who announced Caffeine for Google, "Caffeine provides 50 percent fresher results for web searches than our last index..."<sup>[29]</sup><br />
Google Instant, real-time search, was introduced in late 2010 in an attempt to make search results more timely and relevant. Historically, site administrators have spent months or even years optimizing a website to increase search rankings. With the growth in popularity of social media sites and blogs, the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.<sup>[30]</sup><br />
In February 2011, Google announced the Panda update, which penalizes websites containing content duplicated from other websites and sources. Historically, websites have copied content from one another and benefited in search engine rankings by engaging in this practice; however, Google implemented a new system which punishes sites whose content is not unique.<sup>[31]</sup> The 2012 Google Penguin update attempted to penalize websites that used manipulative techniques to improve their rankings on the search engine,<sup>[32]</sup> and the 2013 Google Hummingbird update featured an algorithm change designed to improve Google's natural language processing and semantic understanding of web pages.<br />
<h2>Methods</h2>
<div class="hatnote relarticle mainarticle">
Main article: <a class="new" href="https://en.wikipedia.org/w/index.php?title=Search_engine_optimization_methods&action=edit&redlink=1" title="Search engine optimization methods (page does not exist)">Search engine optimization methods</a></div>
<table class="metadata plainlinks ambox mbox-small-left ambox-move" role="presentation">
<tbody>
<tr>
<td class="mbox-image"><img alt="" data-file-height="20" data-file-width="50" height="20" src="https://upload.wikimedia.org/wikipedia/commons/thumb/a/aa/Merge-arrow.svg/50px-Merge-arrow.svg.png" width="50" /></td>
<td class="mbox-text"><span class="mbox-text-span">It has been suggested that portions of this section be moved into <i><a class="new" href="https://en.wikipedia.org/w/index.php?title=Search_engine_optimization_methods&action=edit&redlink=1" title="Search engine optimization methods (page does not exist)">Search engine optimization methods</a></i>. (<a href="https://en.wikipedia.org/wiki/Talk:Search_engine_optimization" title="Talk:Search engine optimization">Discuss</a>)</span></td>
</tr>
</tbody></table>
<h3>Getting indexed</h3>
<div class="thumb tright">
<div class="thumbinner" style="width: 352px;">
<a class="image" href="https://en.wikipedia.org/wiki/File:Websites_interlinking_to_illustrate_PageRank_percents.png"><img alt="" class="thumbimage" data-file-height="480" data-file-width="640" height="263" src="https://upload.wikimedia.org/wikipedia/commons/thumb/1/1e/Websites_interlinking_to_illustrate_PageRank_percents.png/350px-Websites_interlinking_to_illustrate_PageRank_percents.png" width="350" /></a>
<br />
<div class="thumbcaption">
</div>
</div>
</div>
<div class="thumb tright">
<div class="thumbinner" style="width: 352px;">
<div class="thumbcaption">
Search engines use complex mathematical algorithms to guess which
websites a user seeks. In this diagram, if each bubble represents a web
site, programs sometimes called <i>spiders</i> examine which sites link
to which other sites, with arrows representing these links. Websites
getting more inbound links, or stronger links, are presumed to be more
important and what the user is searching for. In this example, since
website B is the recipient of numerous inbound links, it ranks more
highly in a web search. And the links "carry through," such that website
C, even though it only has one inbound link, has an inbound link from a
highly popular site (B) while site E does not. Note: percentages are
rounded.</div>
</div>
</div>
The leading search engines, such as Google, Bing and Yahoo!, use crawlers to find pages for their algorithmic search results. Pages that are linked from other search-engine-indexed pages do not need to be submitted because they are found automatically. Two major directories, the Yahoo! Directory and DMOZ, both require manual submission and human editorial review.<sup>[33]</sup> Google offers Google Webmaster Tools, for which an XML Sitemap feed can be created and submitted for free to ensure that all pages are found, especially pages that are not discoverable by automatically following links.<sup>[34]</sup> Yahoo! formerly operated a paid submission service that guaranteed crawling for a cost per click;<sup>[35]</sup> this was discontinued in 2009.<sup>[36]</sup><br />
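As a minimal sketch, an XML Sitemap is just a list of the site's URLs with optional hints for the crawler (the address and dates below are placeholders):<br />
<pre>
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2015-08-01</lastmod>
    <changefreq>weekly</changefreq>
  </url>
  <!-- one <url> entry per page you want the engines to know about -->
</urlset>
</pre>
The finished file is typically uploaded to the site's root directory and then submitted through the search engine's webmaster tools.<br />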
Search engine crawlers may look at a number of different factors when crawling
a site. Not every page is indexed by the search engines. Distance of
pages from the root directory of a site may also be a factor in whether
or not pages get crawled.<sup class="reference" id="cite_ref-cho_37-0">[37]</sup><br />
<h3>Preventing crawling</h3>
<div class="hatnote relarticle mainarticle">
Main article: Robots Exclusion Standard</div>
To avoid undesirable content in the search indexes, webmasters can
instruct spiders not to crawl certain files or directories through the
standard <a class="mw-redirect" href="https://en.wikipedia.org/wiki/Robots.txt" title="Robots.txt">robots.txt</a>
file in the root directory of the domain. Additionally, a page can be
explicitly excluded from a search engine's database by using a meta tag specific to robots. When a search engine visits a site, the robots.txt located in the <a href="https://en.wikipedia.org/wiki/Root_directory" title="Root directory">root directory</a>
is the first file crawled. The robots.txt file is then parsed, and will
instruct the robot as to which pages are not to be crawled. As a search
engine crawler may keep a cached copy of this file, it may on occasion
crawl pages a webmaster does not wish crawled. Pages typically prevented from being crawled include login-specific pages, such as shopping carts, and user-specific content, such as search results from internal searches.
In March 2007, Google warned webmasters that they should prevent
indexing of internal search results because those pages are considered
search spam.<sup class="reference" id="cite_ref-38">[38]</sup><br />
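A minimal sketch of both mechanisms (the paths are placeholders): a robots.txt file at the root of the domain tells compliant crawlers which areas to skip,<br />
<pre>
# robots.txt at http://www.example.com/robots.txt
User-agent: *
Disallow: /cart/
Disallow: /search/
</pre>
while a per-page robots meta tag in the HTML head asks engines not to index that particular page or follow its links:<br />
<pre>
<meta name="robots" content="noindex, nofollow">
</pre>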
<h3>Increasing prominence</h3>
A variety of methods can increase the prominence of a webpage within the search results. <a href="https://en.wikipedia.org/wiki/Methods_of_website_linking" title="Methods of website linking">Cross linking</a> between pages of the same website to provide more links to important pages may improve its visibility.<sup>[39]</sup> Writing content that includes frequently searched keyword phrases, so as to be relevant to a wide variety of search queries, will tend to increase traffic.<sup>[39]</sup> Updating content so as to keep search engines crawling back frequently can give additional weight to a site. Adding relevant keywords to a web page's metadata, including the title tag and meta description, will tend to improve the relevancy of a site's search listings, thus increasing traffic. URL normalization of web pages accessible via multiple URLs, using the canonical link element<sup>[40]</sup> or via 301 redirects, can help make sure links to different versions of the URL all count towards the page's link popularity score.<br />
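For illustration (the addresses below are placeholders), the canonical link element goes in the head of each duplicate URL and points at the preferred version:<br />
<pre>
<!-- Tells search engines which URL is the "real" one when several addresses
     serve the same content (e.g. with and without www, or with tracking parameters) -->
<link rel="canonical" href="http://www.example.com/preferred-page/">
</pre>
Alternatively, assuming an Apache server with mod_alias, a permanent (301) redirect can send both visitors and link value from an old address to the preferred one; a minimal sketch in .htaccess might look like:<br />
<pre>
Redirect 301 /old-page/ http://www.example.com/preferred-page/
</pre>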
<h3>White hat versus black hat techniques</h3>
SEO techniques can be classified into two broad categories:
techniques that search engines recommend as part of good design, and
those techniques of which search engines do not approve. The search
engines attempt to minimize the effect of the latter, among them <a href="https://en.wikipedia.org/wiki/Spamdexing" title="Spamdexing">spamdexing</a>. Industry commentators have classified these methods, and the practitioners who employ them, as either <a href="https://en.wikipedia.org/wiki/White_hat_%28computer_security%29" title="White hat (computer security)">white hat</a> SEO, or <a class="mw-redirect" href="https://en.wikipedia.org/wiki/Black_hat_hacking" title="Black hat hacking">black hat</a> SEO.<sup class="reference" id="cite_ref-41">[41]</sup>
White hats tend to produce results that last a long time, whereas black
hats anticipate that their sites may eventually be banned either
temporarily or permanently once the search engines discover what they
are doing.<sup class="reference" id="cite_ref-42">[42]</sup><br />
An SEO technique is considered white hat if it conforms to the search
engines' guidelines and involves no deception. As the search engine
guidelines<sup class="reference" id="cite_ref-g-wmguide_13-1">[13]</sup><sup class="reference" id="cite_ref-ms-wmguide_14-1">[14]</sup><sup class="reference" id="cite_ref-43">[43]</sup>
are not written as a series of rules or commandments, this is an
important distinction to note. White hat SEO is not just about following
guidelines, but is about ensuring that the content a search engine
indexes and subsequently ranks is the same content a user will see.
White hat advice is generally summed up as creating content for users,
not for search engines, and then making that content easily accessible
to the spiders, rather than attempting to trick the algorithm from its
intended purpose. White hat SEO is in many ways similar to web
development that promotes accessibility,<sup class="reference" id="cite_ref-44">[44]</sup> although the two are not identical.<br />
Black hat SEO attempts to improve rankings in ways that are disapproved of by the search engines or involve deception. One black hat technique uses text that is hidden, either as text colored similarly to the background, in an invisible div, or positioned off-screen; an example of this hidden-text approach is sketched below. Another method serves a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known as cloaking.<br />
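Purely as an illustration of what search engines look for and penalize (not a recommendation), hidden text is usually just styled out of view:<br />
<pre>
<!-- Keyword text the visitor never sees: same color as the background,
     or pushed far off screen. Engines treat this as deceptive. -->
<div style="color: #ffffff; background-color: #ffffff;">cheap courses cheap courses cheap courses</div>
<div style="position: absolute; left: -9999px;">more stuffed keywords here</div>
</pre>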
Another category sometimes used is grey hat SEO. This sits between the black hat and white hat approaches: the methods employed avoid having the site penalized, but they do not act to produce the best content for users; rather, they are entirely focused on improving search engine rankings.<br />
Search engines may penalize sites they discover using black hat
methods, either by reducing their rankings or eliminating their listings
from their databases altogether. Such penalties can be applied either
automatically by the search engines' algorithms, or by a manual site
review. One example was the February 2006 Google removal of both BMW Germany and Ricoh Germany for use of deceptive practices.<sup class="reference" id="cite_ref-intwebspam_45-0">[45]</sup> Both companies, however, quickly apologized, fixed the offending pages, and were restored to Google's list.<sup class="reference" id="cite_ref-46">[46]</sup><br />
<h2>As a marketing strategy</h2>
SEO is not an appropriate strategy for every website, and other Internet marketing strategies can be more effective, such as paid advertising through pay-per-click (PPC) campaigns, depending on the site operator's goals.<sup>[47]</sup>
A successful Internet marketing campaign may also depend upon building
high quality web pages to engage and persuade, setting up analytics programs to enable site owners to measure results, and improving a site's conversion rate.<sup class="reference" id="cite_ref-48">[48]</sup><br />
SEO may generate an adequate return on investment.
However, search engines are not paid for organic search traffic, their
algorithms change, and there are no guarantees of continued referrals.
Due to this lack of guarantees and certainty, a business that relies
heavily on search engine traffic can suffer major losses if the search
engines stop sending visitors.<sup class="reference" id="cite_ref-49">[49]</sup>
Search engines can change their algorithms, impacting a website's
placement, possibly resulting in a serious loss of traffic. According to
Google's CEO, Eric Schmidt, in 2010, Google made over 500 algorithm
changes – almost 1.5 per day.<sup class="reference" id="cite_ref-50">[50]</sup> It is considered wise business practice for website operators to liberate themselves from dependence on search engine traffic.<sup class="reference" id="cite_ref-51">[51]</sup><br />
<h2>International markets</h2>
Optimization techniques are highly tuned to the dominant search
engines in the target market. The search engines' market shares vary
from market to market, as does competition.</div>
<h2>How to Troubleshoot Internet Connection Problems</h2>
<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<img alt="hand-plugging-in-ethernet-cable" border="0" src="http://cdn5.howtogeek.com/wp-content/uploads/2012/10/650x300xhand-plugging-in-ethernet-cable.jpg.pagespeed.ic.Q4oaFSQkfK.jpg" height="300" style="background-image: none; border-width: 0px; display: inline; margin: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="hand-plugging-in-ethernet-cable" width="650" /><br />
<span style="font-size: small;">Internet connection problems can be frustrating. Rather than mashing
F5 and desperately trying to reload your favorite website when you
experience a problem, here are some ways you can troubleshoot the
problem and identify the cause.</span><br />
<span style="font-size: small;">
</span><span style="font-size: small;">Ensure you check the physical connections before getting too involved
with troubleshooting. Someone could have accidentally kicked the router
or modem’s power cable or pulled an Ethernet cable out of a socket,
causing the problem</span>.<br />
<br />
<h3>
<span style="font-size: large;">Ping</span></h3>
One of the first things to try when your connection doesn’t seem to
be working properly is the ping command. Open a Command Prompt window
from your Start menu and run a command like <b>ping google.com</b> or <b>ping howtogeek.com</b>.<br />
This command sends several packets to the address you specify. The
web server responds to each packet it receives. In the command below, we
can see that everything is working fine – there’s 0% packet loss and
the time each packet takes is fairly low.<br />
<img alt="image" border="0" src="http://cdn5.howtogeek.com/wp-content/uploads/2012/10/image112.png" height="390" style="background-image: none; border-width: 0px; display: inline; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="image" width="643" /><br />
If you see packet loss (in other words, if the web server didn’t
respond to one or more of the packets you sent), this can indicate a
network problem. If the web server sometimes takes a much longer amount
of time to respond to some of your other packets, this can also indicate
a network problem. This problem can be with the website itself
(unlikely if the same problem occurs on multiple websites), with your
Internet service provider, or on your network (for example, a problem
with your router).<br />
Note that some websites never respond to pings. For example, ping microsoft.com will never result in any responses.<br />
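As a hedged example (exact addresses and times will differ on your machine), two ping options worth knowing on Windows are:<br />
<pre>
ping -n 10 google.com
ping -t google.com
</pre>
The first sends 10 echo requests instead of the default 4; the second keeps pinging until you press Ctrl+C. The statistics block at the end of the output, which reports roughly "Sent = 10, Received = 10, Lost = 0 (0% loss)" along with minimum, maximum, and average round-trip times, is the quickest place to spot packet loss or unusually slow responses.<br />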
<h3>
Problems With a Specific Website</h3>
If you’re experiencing issues accessing websites and ping seems to be
working properly, it’s possible that one (or more) websites are
experiencing problems on their end.<br />
To check whether a website is working properly, you can use Down For Everyone Or Just For Me,
a tool that tries to connect to websites and determine if they’re
actually down or not. If this tool says the website is down for
everyone, the problem is on the website’s end.<br />
<img alt="image" border="0" src="http://cdn5.howtogeek.com/wp-content/uploads/2012/10/image113.png" height="329" style="background-image: none; border-width: 0px; display: inline; margin: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="image" width="650" /><br />
If this tool says the website is down for just you, that could
indicate a number of things. It’s possible that there’s a problem
between your computer and the path it takes to get to that website’s
servers on the network. You can use the traceroute command (for example,
<b>tracert google.com</b>) to trace the route packets take to
get to the website’s address and see if there are any problems along
the way. However, if there are problems, you can’t do much more than
wait for them to be fixed.<br />
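As a rough sketch (hostnames and hop counts will vary), the command prints one numbered line per router between you and the destination:<br />
<pre>
tracert google.com
tracert -d google.com
</pre>
The -d switch skips reverse DNS lookups, which makes the trace run faster. If every hop after a certain point shows "Request timed out," the problem likely starts around that router, although keep in mind that some routers simply refuse to answer traceroute probes while still forwarding traffic normally.<br />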
<h3>
<span style="font-size: large;">Modem & Router Issues</span></h3>
If you are experiencing problems with a variety of websites, they may
be caused by your modem or router. The modem is the device that
communicates with your Internet service provider, while the router
shares the connection among all the computers and other networked
devices in your household. In some cases, the modem and router may be
the same device.<br />
Take a look at the router. If green lights are flashing on it, that’s
normal and indicates network traffic. If you see a steady, blinking orange light, that generally indicates a problem. The same applies to the modem: a blinking orange light usually indicates a problem.<br />
<img alt="modem-lights" border="0" src="http://cdn5.howtogeek.com/wp-content/uploads/2012/10/modem-lights.jpg" height="512" style="background-image: none; border-width: 0px; display: inline; margin: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="modem-lights" width="650" /><br />
If the lights indicate that either device is experiencing a problem, try unplugging them and plugging them back in. This is just
like restarting your computer. You may also want to try this even if the
lights are blinking normally – we’ve experienced flaky routers that
occasionally needed to be reset, just like Windows computers. Bear in
mind that it may take your modem a few minutes to reconnect to your
Internet service provider.<br />
If you still experience problems, you may need to perform a factory
reset on your router or upgrade its firmware. To test whether the
problem is really with your router or not, you can plug your computer’s
Ethernet cable directly into your modem. If the connection now works
properly, it’s clear that the router is causing you problems.<br />
<small>Image Credit: <a href="http://www.flickr.com/photos/434pics/3502785071/" rel="nofollow">Bryan Brenneman on Flickr</a></small><br />
<h3>
<span style="font-size: large;">Issues With One Computer</span></h3>
If you’re only experiencing network problems on one computer on your
network, it’s likely that there’s a software problem with the computer.
The problem could be caused by a virus or some sort of malware or an
issue with a specific browser.<br />
Do an antivirus scan on the computer and try installing a different
browser and accessing that website in the other browser. There are lots
of other software problems that could be the cause, including a
misconfigured firewall.<br />
<h3>
<span style="font-size: large;">DNS Server Problems</span></h3>
When you try to access Google.com, your computer contacts its DNS
server and asks for Google.com’s IP address. The default DNS servers
your network uses are provided by your Internet service provider, and
they may sometimes experience problems.<br />
You can try accessing a website at its IP address directly, which
bypasses the DNS server. For example, plug this address into your web
browser’s address bar to visit Google directly:<br />
<blockquote>
<a href="http://74.125.224.72/">http://74.125.224.72</a></blockquote>
<img alt="image" border="0" src="http://cdn5.howtogeek.com/wp-content/uploads/2012/10/image114.png" height="500" style="background-image: none; border-width: 0px; display: inline; margin: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="image" width="650" /><br />
If the IP address method works but you still can’t access google.com,
it’s a problem with your DNS servers. Rather than wait for your
Internet service provider to fix the problem, you can try using a
third-party DNS server like <a href="http://www.howtogeek.com/79833/easily-add-opendns-to-your-router/">OpenDNS</a> or <a href="http://www.howtogeek.com/howto/7406/speed-up-your-web-browsing-with-google-public-dns/">Google Public DNS</a>.<br />
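For a quick check from the command line (the hostname here is just an example), nslookup asks a DNS server to resolve a name, and you can point it at a third-party resolver to compare:<br />
<pre>
nslookup google.com
nslookup google.com 8.8.8.8
</pre>
The first query uses whatever DNS server your connection is currently configured with; the second asks Google Public DNS (8.8.8.8) directly. If the second succeeds while the first fails or times out, your default DNS server is the likely culprit, and switching resolvers should get you browsing again.<br />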
<div style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;">
</div>
</div>
<h2>DCN Networking</h2>
<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<b>What is DCN </b></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
A <b>dynamic circuit network</b> (<b>DCN</b>) is an advanced computer networking technology that combines traditional packet-switched communication based on the <a href="https://en.wikipedia.org/wiki/Internet_Protocol" title="Internet Protocol">Internet Protocol</a>, as used in the <a href="https://en.wikipedia.org/wiki/Internet" title="Internet">Internet</a>, with <a class="mw-redirect" href="https://en.wikipedia.org/wiki/Circuit-switched" title="Circuit-switched">circuit-switched</a> technologies that are characteristic of traditional telephone network systems. This combination allows user-initiated ad hoc dedicated allocation of network bandwidth for high-demand, real-time applications and network services, delivered over an optical fiber infrastructure.<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjd2UnbPRUggTnJRSIjlpUKBYuAc-10dxr4lW3r5dZRSxZ2VfNlFNqXPwAp2DNr56LHb8QLNWXe5Sd7wixz5ipf7tVEn1KuIslvrp0lDoqTF4GXMlS1vwbhbgyvSTEKrn_B7m2g_vfa1Q/s1600/2+copy.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="524" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjd2UnbPRUggTnJRSIjlpUKBYuAc-10dxr4lW3r5dZRSxZ2VfNlFNqXPwAp2DNr56LHb8QLNWXe5Sd7wixz5ipf7tVEn1KuIslvrp0lDoqTF4GXMlS1vwbhbgyvSTEKrn_B7m2g_vfa1Q/s640/2+copy.JPG" width="640" /></a></div>
</div>
<h2>Disadvantages of Computer</h2>
The use of computers has also created some problems in society, which are as follows.
<b>Unemployment</b>
<b>Wastage of time and energy</b>
Many people use computers without a positive purpose. They play games and chat for long periods of time, which wastes both time and energy. The younger generation now spends more time on social media websites like Facebook and Twitter, or texting their friends all night on smartphones, which is bad for both their studies and their health, and it also has adverse effects on their social lives.
<b>Data Security</b>
The data stored on a computer can be accessed by unauthorized persons through networks. This has created serious problems for data security.
<b>Computer Crimes</b>
Some people use computers for negative activities. They hack other people's credit card numbers and misuse them, or steal important data from large organizations.
<b>Privacy Violation</b>
Computers are used to store people's personal data. A person's privacy can be violated if personal and confidential records are not protected properly.
<b>Health Risks</b>
The improper and prolonged use of a computer can result in injuries or disorders of the hands, wrists, elbows, eyes, neck, and back. Users can avoid health risks by using the computer in a proper position. They should also take regular breaks when using the computer for long periods; a break of a couple of minutes after every 30 minutes of computer use is recommended.
<h2>Advantages of Computer</h2>
Computers have made a vital impact on society. They have changed the way of life, and the use of computer technology has affected every field of life. People use computers to perform different tasks quickly and easily. The use of computers makes different tasks easier; it also saves time and effort and reduces the overall cost of completing a particular task.
Many organizations use computers to keep records of their customers. Banks use computers to maintain accounts and manage financial transactions, and they also provide the facility of online banking. Customers can check their account balances using the internet and can also make financial transactions online. Transactions are handled easily and quickly with computerized systems.
<h2>CCNP Master</h2>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKkOcjt5RbpqzPmXZp8H0ZGMaETpgzBUX-6RtaRBZK0eBWCg78fPG7T0vcO0xyYP-U92u8Fl_2Q2RKpDzMaDQ3ACJeVn-AoYDUCGE4iH1tPuIr_LyS5q3ckgKR9YdthBcxbWoIPN3WDW0/s1600/how-to-master-ccna-ccnp-4-pack-3d-book.png" imageanchor="1"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKkOcjt5RbpqzPmXZp8H0ZGMaETpgzBUX-6RtaRBZK0eBWCg78fPG7T0vcO0xyYP-U92u8Fl_2Q2RKpDzMaDQ3ACJeVn-AoYDUCGE4iH1tPuIr_LyS5q3ckgKR9YdthBcxbWoIPN3WDW0/s320/how-to-master-ccna-ccnp-4-pack-3d-book.png" /></a>
For network engineers who aspire to plan, implement, verify, and troubleshoot local and wide-area enterprise networks, the Cisco CCNP Routing and Switching certification program provides the education and training required to develop hands-on skills and best practices.

<h2>CCNA</h2>
<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhc8mvzDGIv_HMWxEGfr9cavS_vlH4jM_fPPLDfBOsy6b5m4eYhrgL5LceV-xlskwuR4yTFjd5mlqxIOCCtsfMisW8H6Lv4in7w1Ha2hsg6a3EQSwhPuuHUruRXxtqC8r9Ev3pM72Wa_A/s1600/2000px-Cisco_logo.svg.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhc8mvzDGIv_HMWxEGfr9cavS_vlH4jM_fPPLDfBOsy6b5m4eYhrgL5LceV-xlskwuR4yTFjd5mlqxIOCCtsfMisW8H6Lv4in7w1Ha2hsg6a3EQSwhPuuHUruRXxtqC8r9Ev3pM72Wa_A/s1600/2000px-Cisco_logo.svg.png" height="180" width="320" /></a></div>
<br />
Cisco Certified Network Associate (CCNA) Routing and Switching is a certification program for entry-level network engineers that helps maximize your investment in foundational networking knowledge and increase the value of your employer's network. CCNA Routing and Switching is for Network Specialists, Network Administrators, and Network Support Engineers with 1-3 years of experience. The CCNA Routing and Switching validates the ability to install, configure, operate, and troubleshoot medium-size routed and switched networks.</div>
<h2>Importance of Computer Training</h2>
<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkcmR0_mUJCmGq1Z8fdVWu1I-Qy1UF7EgEA61RYRmse-TgYtQv8ujICiUar3vaXwsdQWebSsmylECdbbHNYR6CDzubijXJf2MB1Uhd1s0CWu17e1Lty7_cDuNRrhmkHPny9Ob3oGSuiQ/s1600/Desktop_computer-4.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkcmR0_mUJCmGq1Z8fdVWu1I-Qy1UF7EgEA61RYRmse-TgYtQv8ujICiUar3vaXwsdQWebSsmylECdbbHNYR6CDzubijXJf2MB1Uhd1s0CWu17e1Lty7_cDuNRrhmkHPny9Ob3oGSuiQ/s1600/Desktop_computer-4.jpg" height="212" width="320" /></a></div>
<br />
<br />
<br />
Computer training is an important factor in 21st century workplaces. The
importance of computer training can be viewed in two ways. First, it is
vital for job applicants to obtain computer training to make themselves
more valuable to potential employers and to obtain higher-paying jobs.
Second, it is important for companies to utilize computer training in
their new-hire training programs and employee development initiatives.</div>