Friday, July 20, 2007
Extreme Networks and Nortel Team Up on Carrier Ethernet Networks
In a live demonstration, the companies showed Ethernet connections running over a PBT trunk between the Extreme Networks booth and the Nortel booth, one at each end of the venue.
Extreme Networks and Nortel carried out a demonstration of Provider Backbone Transport (PBT) interoperability built around Extreme Networks' new Carrier Ethernet switch, the BlackDiamond 12802R.
"We are very pleased with this advance and the results obtained with our new BlackDiamond 12800 product family," said Peter Lunk, director of service provider marketing at Extreme Networks. "PBT is the next step in the adoption of Ethernet as the preferred transport protocol for carrier networks. It provides the determinism and reliability that service providers who have hesitated to deploy Ethernet transport networks have been waiting for."
src="http://pagead2.googlesyndication.com/pagead/show_ads.js">
Spam Hidden in PDF Documents Gains Ground
Just as computer security companies finally managed to get image spam under control, a new distribution method is gaining momentum: spam embedded in PDF documents.
Although the volume of spam camouflaged in images has dropped considerably in recent months, Internet users cannot relax. Spammers constantly devise new tricks to evade the filters designed to block junk mail. In a report, Symantec warns that so-called PDF spam is becoming a growing problem. The advertising message is embedded in a PDF file, and the e-mail generally has a more professional tone than typical spam.
"Spammers know it is getting harder and harder to slip past antispam filters, so they need people to open the mail and thereby increase the so-called 'hit rate.' Users can be induced to open a message if they feel the PDF attachment looks professional," Symantec writes in a statement.
It is worth noting that when spammers began distributing spam hidden in images, the computer security companies were caught off guard and took around six months to develop effective filters. Image spam is no longer a massive problem: at the beginning of the year it accounted for 50% of all spam, while it currently stands at about 14%. The challenge for Symantec and the rest of the industry is now PDF spam.
Wednesday, July 18, 2007
Ten Tips for Avoiding Google Hell
In a nutshell, SEO is the practice of optimizing a website's pages to ensure that they are search-engine friendly and that they target their intended key audiences. Each of the major search engines (Google, Yahoo!, MSN, and Ask) sends out "spiders" or "crawlers." These are programs that come to your website and "collect" all the information about each web page and take it back to the search engine's database. The search engine houses the information about the pages it collected and then presents it when a query from a searcher matches what your page is about. Each search engine applies its own algorithm to determine a page's relevancy to the searcher's query, and returns a set of results.
Here are some very basic tips that webmasters, or online marketers responsible for promotion of a website, should keep in mind. These fundamental strategies can help overcome many issues that webmasters and marketers encounter when dealing with search engines — issues that can inhibit a site from ranking, and prevent pages from being crawled or even push them into the supplemental index.
1. Use the robots.txt File
The robots.txt file is probably one of the easiest tactics webmasters can employ. Using it can help ensure that the right folders are being found and crawled by the search engines. Use of robots.txt can also assist webmasters in keeping the search engine spiders out of places they shouldn't be in. This simple text file is placed in the root directory of your website and basically acts as a "roadmap" for the search engine spiders to follow when accessing your site. You can use specific commands to allow or block folders from being crawled by all search engine spiders or a select few.
To create a robots.txt file, just use your notepad program, and follow the standard specifications for setting up instructions for the search engine spiders to follow. To get help understanding how to set up a robots.txt file, take a look at Robotstxt.org, a great resource for listing out specific robots and the syntax needed for excluding robots and folders. Referring to the specifications at Robotstxt.org will also help you avoid the pitfalls of possibly blocking folders or allowing the wrong folders to be indexed by the search engines.
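As a minimal sketch, a robots.txt file that lets every spider crawl the site while keeping all of them out of two folders might look like this (the folder names are placeholders, not a recommendation):

    # Applies to every search engine spider
    User-agent: *
    # Keep spiders out of folders they shouldn't be in
    Disallow: /cgi-bin/
    Disallow: /temp/

The same file can carry additional User-agent sections with rules for specific robots; Robotstxt.org lists the robot names and the exact syntax.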
src="http://pagead2.googlesyndication.com/pagead/show_ads.js">
2. Validate Your Website
Two of the four major search engines, Google and Yahoo!, allow and encourage webmasters to validate their websites. This is a simple task that involves uploading files to the root of your website directory, or placing information within the website's meta tags.
When a website is validated, it helps to build a level of trust with the search engine. Validation also lets the search engine know that the site is indeed current, valid, and functioning.
Yahoo! allows you to sign up for their Site Explorer service (login required) and monitor your site's authentication. Blog feeds can also be submitted if the website has a blog. Yahoo! will also notify the email address associated with the account when any actions are taken on its authenticated websites.
Google's Webmaster Central is a great resource that allows webmasters to validate their websites, and offers a lot of other tools for monitoring a website's indexing with Google. Signing up is free. Once a website is validated through the Webmaster Central tool, a webmaster can monitor and control how often a Google search engine spider comes to "crawl" the website. Webmasters can also monitor links coming into the site, high-ranking keywords for the site, and most-clicked-on query results.
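As an illustration only, the meta-tag method of validation amounts to dropping a verification tag into the page's head section. The tag name and content value below are placeholders; the real pair is generated for you inside the engine's tool (Webmaster Central, or Yahoo!'s Site Explorer equivalent).

    <head>
      <!-- Placeholder verification tag; copy the exact name/content pair the tool gives you -->
      <meta name="google-site-verification" content="PLACEHOLDER-TOKEN" />
    </head>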
Both Yahoo! Site Explorer and Google's Webmaster Central allow webmasters to submit sitemaps — the subject of our next tip.
3. Use Sitemaps
Sitemaps work similarly to the robots.txt file, in that they act as a roadmap for search engines' spiders. A sitemap tells the search engines which pages of the website should be crawled. If the site is small, you can put all your page URLs into the sitemap file; however, you'll need a different approach if your site is a couple thousand pages or more.
For larger sites, you'll want to give the spiders a "guide" to follow, so make sure your higher-level category and subcategory pages are listed in the sitemap file. By including these higher-level pages in the sitemap, you let the spiders follow the links that will lead to deeper pages within your site.
A protocol for sitemaps was set up through a cooperative effort from Google, Yahoo!, and MSN. The specifications for setting up a sitemap file either through a .txt file or an .xml file can be found at the Sitemaps.org website. Following these specifications will help webmasters to ensure they are avoiding some common pitfalls. Currently, both Google and Yahoo! allow you to submit your sitemap file.
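A bare-bones sitemap file following the Sitemaps.org XML format looks roughly like this (the URLs and dates are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <!-- Home page: crawl often, highest priority -->
      <url>
        <loc>http://www.example.com/</loc>
        <lastmod>2007-07-18</lastmod>
        <changefreq>weekly</changefreq>
        <priority>1.0</priority>
      </url>
      <!-- A higher-level category page; only <loc> is required -->
      <url>
        <loc>http://www.example.com/widgets/</loc>
      </url>
    </urlset>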
4. Acquire Relevant Links
One of the biggest factors in getting a website to rank for keywords or phrases is not simply the number of links pointing to it; how those links are actually worded matters immensely.
Search engines consider a link to a website as a "vote of confidence." However, the search engine really takes into account just how the link is formed. For example, links to your site that are worded as just "click here" won't be as powerful as a link worded "Buy blue widgets at Blue Widgets' Website." The value of the link is greatly enhanced by using keywords within the link, rather than just "click here."
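In HTML terms the difference is just the anchor text, but it is exactly what the engines weigh (the URL is a placeholder):

    <!-- Weak: generic anchor text tells the engine nothing about the target page -->
    <a href="http://www.example.com/widgets/">click here</a>

    <!-- Stronger: keyword-rich anchor text describes what the page is about -->
    <a href="http://www.example.com/widgets/">Buy blue widgets at Blue Widgets' Website</a>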
If you have the ability to suggest or influence the formation of a link that points to your site, make sure to use the keywords you are hoping to rank for. Not only does it benefit your website; it can also help the site that is linking to you.
Webmasters and search marketers: Take heed of where you are acquiring links from. Links to your website from bad neighborhoods can negatively affect your website's relevancy in the search engine's eyes ("bad neighborhoods" are described in Google's Webmaster Guidelines). If the linking site seems shady or spammy, or if something just "doesn't seem right," trust your instincts and check the site out thoroughly before acquiring a link from it.
5. Don't Sell Links
Selling links on a website is a fairly controversial subject. On one hand, webmasters should be allowed to do what they want with their websites. On the other hand, the search engines view this tactic as high-risk behavior if the links are not denoted as "paid" in some way. Then there is the whole issue of just what is considered a payment.
If you are selling links on your page, the search engines suggest that you tag the link with the rel=nofollow attribute. This basically tells the search engines that you are not "passing confidence" to the site you are linking to. But this action essentially defeats the whole purpose of selling links on a website, and therein lies the controversy.
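A sold link tagged this way looks like the following (the URL is a placeholder):

    <!-- rel="nofollow" tells the engines not to pass "confidence" to the target site -->
    <a href="http://www.example.com/" rel="nofollow">Our sponsor</a>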
Google's quality guidelines state, "Don't participate in link schemes designed to increase your site's ranking or PageRank." If your website sells links for this purpose, and it is discovered by the search engines, your website may receive some sort of penalty that will affect the site's ranking. And in Google, this could possibly affect the site's PageRank.
6. Eliminate Duplicate Content
Search engines want to display the most relevant web pages in response to a searcher's query. Therefore, when pages have identical or very similar content, some of those pages will likely be disregarded by the search engine. Some webmasters don't even realize they're creating duplicate pages, because session IDs or multiple variables in a URL can lead visitors to the same page under different addresses. These are just a few of the ways duplicate content can be created, and it causes a substantial headache for marketers and webmasters alike.
Google states in its webmaster guidelines:
"Allow search bots to crawl your sites without session IDs or arguments that track their path through the site. These techniques are useful for tracking individual user behavior, but the access pattern of bots is entirely different. Using these techniques may result in incomplete indexing of your site, as bots may not be able to eliminate URLs that look different but actually point to the same page."
Be aware of the following other ways of accidentally creating duplicate content:
- "Printer-friendly" pages
- Manufacturers' product descriptions that are reused by other retailers
- Mirrored websites
- Syndicated articles
All of these situations can get a web page ranked lower in the search engine, or possibly get it pushed into Google's supplemental index — neither of which you want to happen for any website. There are some simple ways to avoid this, such as blocking folders from the search engine spiders with the Robots.txt file (see tip #1).
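For instance, if the printer-friendly copies of your pages all live under a single folder (the folder name here is hypothetical), a couple of lines in robots.txt keep those duplicates out of the index:

    User-agent: *
    # Block duplicate, printer-friendly copies of content pages
    Disallow: /print/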
7. Use Correct Redirects
Redirects are generally used by webmasters to take visitors who land on old, out-of-date web pages (usually from prior bookmarks) to the pages that are most current within the website's new structure. There are two types of redirects: a Temporary Redirect, technically known as a 302, and a Permanent Redirect, technically known as a 301.
Using the wrong kind of redirect when a web page is being moved can cause the page to lose its rank within the search engine. If a web page is being moved permanently to a new URL, the Permanent Redirect/301 should be used.
A Permanent Redirect tells the search engine to pass all the old page's "juice" onto the new page. This means all the value acquired by links that still point to the old page will be passed onto the new (redirected) page. If a Temporary Redirect/302 is used, no value is passed onto the new page.
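How you declare a 301 depends on your web server. As one common sketch, on an Apache server a permanent redirect can be set in the site configuration or an .htaccess file like this (both paths are placeholders):

    # 301 = permanent: the old page's link "juice" is passed on to the new URL
    Redirect 301 /old-page.html http://www.example.com/new-page.html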
8. Simplify URL Parameters
Webmasters of sites that dynamically generate content based on variables in a URL should be cognizant that search engines might not be capable of crawling some web pages. The spiders/crawlers sent out by the search engines are very "childlike" in nature. Should they run into any "roadblocks," they will stop their crawl.
Having too many parameters in your URL is one of the roadblocks that can stop crawlers from going deeper. In general terms, most search engines have a difficult time crawling URLs with more than four parameters.
If having too many parameters is an issue with your website, it might be wise to discuss with your tech team some alternatives for passing the variables to the server to generate the dynamic pages. Tools such as ISAPI_Rewrite can be a powerful assist in efforts to resolve issues with variables.
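ISAPI_Rewrite (for IIS) and Apache's mod_rewrite use a similar rule syntax. As a rough sketch, assuming a hypothetical product.php script that normally needs two query-string parameters, a rewrite rule can expose a clean, parameter-free URL instead:

    RewriteEngine On
    # A request for /widgets/42/ is rewritten internally to the real dynamic URL
    RewriteRule ^widgets/([0-9]+)/?$ /product.php?category=widgets&id=$1 [L]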
9. Customize Title Tags and Meta Descriptions
One of the most important pieces in optimizing a web page is the page title tag: It tells search engines what the page is all about. A lot of relevancy is placed in a page's title tag, so careful crafting of that tag will not only reflect a company's position but also capitalize on the use of keywords and phrases.
Unless your company is a household name like Coca-Cola or Toyota or Ritz® crackers, it's unlikely that people will be searching on your company's name to find the services or products on your website. Keeping this in mind, it's clear that you should avoid title tags like "XYZ Company Services" or "About Us — XYZ Company."
Along with the title tag, the page's meta description is an important piece of the marketing puzzle. The meta description has little value in determining relevancy to a search engine, but it is displayed on the search engine's results page, right below the page's title. Customizing both the page's title tag and meta description to focus on the actual content of the page goes a long way toward getting searchers to click on your listing in the search results.
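Putting the two together, a keyword-focused head section for a hypothetical widget retailer might read:

    <head>
      <!-- Title tag: heavily weighted for relevancy, so lead with keywords, not just the company name -->
      <title>Blue Widgets and Same-Day Widget Repair | XYZ Company</title>
      <!-- Meta description: little ranking weight, but it is the snippet searchers see under your title -->
      <meta name="description" content="XYZ Company sells handmade blue widgets and offers same-day widget repair.">
    </head>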
10. Request Reinclusion (Google)
Let's say you've discovered that your website has been removed from Google's index, or that a penalty has been imposed on your website's pages. What's a webmaster or search marketer to do? Are you banned or penalized forever?
Thankfully, Google can be a forgiving search engine. If you address and fix the issues that Google has identified as the reason for the banning or penalty, you then have recourse: the Reinclusion Request. Google offers you the ability to submit this request to have your website reevaluated to verify whether the offending practices Google found have been "cleaned up."
The one thing the reinclusion request won't do is guarantee that Google will include your website in the top 10 when a search is done on keywords or phrases.
Get More Help
There are a few other issues that can give webmasters and search marketers some hassles when it comes to getting their websites to rank in the search engines. In general, though, we've covered some common problems that can be resolved rather easily. If your site is experiencing any search ranking issues that are particularly difficult to work out, don't be afraid to join the webmaster users groups on both Google and Yahoo! — both groups are friendly and helpful. You'll also find that employees from these search engines will lend a helping hand when they can.
Google and Yahoo! are currently the only two engines that have dedicated actual resources and created programs to communicate with webmasters and site owners. Both MSN and Ask are currently in the process of developing similar tools. MSN does have its own set of tools called AdLabs, but these tools are highly geared toward pay-per-click advertising.
So if you ever find yourself in "search engine hell," stop and take a look at the ten situations listed in this article and compare them to what's going on with your website. Hopefully there's a nugget or two of information here that will help you climb out of trouble and into higher rankings!
Thanks to the InformIT team for this excellent article.
I hope you enjoy it and find it helpful.
Bad User Interface of the Week: File It Under “Bad”
In the UNIX world, a file is an untyped, unstructured stream of bytes. This is fairly useful from a programmer's perspective: it's a lowest common denominator that can be used to implement more useful abstractions. But what does it mean from a user's perspective?
I tried an experiment. Over the last few months, I have asked a small number of people to tell me what a file is. Some have been computer scientists, some in business, and some fairly naive computer users. Not a single one could tell me what a file was.
As with many user interface concepts, a file is based on a physical metaphor: a thin folder for storing documents. Interestingly, if I asked people to tell me what a document was, I got sensible answers. Terminology can be very important when describing parts of the user interface.
Once a user understands what a file is, there are still some hurdles to overcome. One of the biggest is the concept of "saving." In general, saving means "write the current document state to disk." Somewhat counter-intuitively, saving is typically a destructive operation; when you save, you are usually overwriting the previous version.
How should saving work? Most programs maintain an undo history, but very few save it with the document. Those that do often present a security problem: if you share the file with someone else, that person can see the revision history of the document. It turns out that "saving" is actually used for two purposes:
- Checkpointing
- Publishing
These are conceptually quite different. Checkpointing is a way of telling the computer that you might want to go back to that particular state. Some filesystems, such as that of VMS, incorporate this automatically. Every time you "save" a file, you are performing a non-destructive operation, creating a new version of it on disk. ZFS can do this very well, allowing each version of a file to be checkpointed and only using disk storage for the changes. LFS also permits this kind of interaction. Most conventional UNIX and Windows filesystems, however, do not.
Publishing is quite different. Publishing is the act of sending a document to a remote person. This is often the virtual equivalent of selecting Print, and some systems integrate the UIs. On recent versions of OS X, for example, the Print dialog has a button for sending the PDF through an Automator workflow, which can be used for emailing the PDF version to the recipient.
Combining two very separate user operations into the same user interface because the implementations are similar is a very common mistake. This is something that files themselves often exhibit. When I write a long document, I have a text file containing the typesetting commands and text, and a PDF file containing the typeset output. The same is true when writing software; I have source files and an executable file.
This almost certainly sounds normal to most readers, because this is the way most systems work. But why is it this way? A source code listing and an executable are, in an abstract sense, simply different views on the same data. One can easily be produced from the other (and in some cases this process is even reversible). Many languages do not make this distinction at all; the compiler generates the executable code when the program runs, and if it caches the result, it does so in a manner not visible to the user.
One of the first systems to work this way was Smalltalk, which dispensed with files altogether. The environment contained a class browser, which allowed classes and methods to be edited without any thought about how they were stored on disk, or about whether they needed to be recompiled.
When designing a user interface, always try to remember how your users will be interacting with it. What is actually important from their perspective? No one ever needs to edit files; they need to edit data. Files are one abstraction of data, but they may not be the best one for the task at hand. Music jukebox and photo album software have shown how much better a specialized viewer can be for certain tasks. A user of something like iTunes is aware of tracks, with track and album names, artists, and other metadata, not of a hierarchy of files in folders on a disk. Picking the correct abstractions for user interfaces is as important as picking them for systems.
Tuesday, July 17, 2007
Samsung Launches the Q1 Ultra
Samsung's ultralight "all-in-one" solution features a QWERTY keyboard, a video camera, GPS navigation, and 30% more processing power.
Samsung Electronics has introduced the Samsung Q1 Ultra, the next generation of the Q1 UMPC originally launched in March 2006.
Samsung has again collaborated with Intel and Microsoft to create a complete UMPC. Focused on mobility and functionality, the Q1 Ultra is one of the lightest portable computers currently on the market. Weighing 690 g and measuring less than 23.9 mm thick, it offers the same quality in communications, entertainment, and functionality as a conventional PC.
The Q1 Ultra runs for up to 3.5 hours on the standard 4-cell battery, or around 7 hours on the extended battery, an improvement of more than 30% in battery life. It includes 10/100 Ethernet, 802.11b/g wireless LAN, HSDPA/WiBro, and Bluetooth 2.0.
Kim Houn Soon, Executive Vice President of Samsung Computer Systems, commented: "Last year, Samsung introduced the UMPC concept to the consumer market for the first time, to great anticipation. This year we are launching the Q1 Ultra based on feedback from users themselves. By 2010, we expect the UMPC market to reach 10 million units, with the Q1 Ultra leading the market."
Oracle Introduces Oracle Database 11g
With more than 400 features, 15 million hours of testing, and 36,000 person-months of development, Oracle Database 11g is the most innovative and highest-quality software product Oracle has ever announced.
Oracle introduced Oracle Database 11g, the latest version of the company's database, in New York City.
"Oracle Database 11g, the result of 30 years of experience in database design, delivers next-generation enterprise information management," said Andy Mendelsohn, senior vice president of Database Server Technologies at Oracle.
"More than ever, our customers face data challenges: rapid growth, greater integration, and pressure on the IT costs of connecting it all. Oracle Database 10g was the first to use grid computing, and more than half of Oracle's customers have adopted that release. Oracle Database 11g now delivers the key capabilities our customers have been demanding to accelerate the massive adoption and growth of Oracle grids; this represents real innovation that addresses real challenges, according to real customers," the executive added.
Monday, July 16, 2007
VIA Processors Support Computer-Assisted Education
VIA C7-D desktop processors power new energy-efficient, affordable computers aimed at emerging markets such as the education sector in South Africa.
VIA Technologies and Mecer announced the new Mecer Education PC, powered by the 1.5 GHz VIA C7-D desktop processor and compatible with a variety of operating systems, including Microsoft Windows XP Starter Edition.
Brian Sam, VIA Technologies' sales director for Latin America, emphasized that this platform is the same one being sold in Latin America in PCs assembled by system integrators.
Sam added that VIA has already established strategic alliances with its channel partners throughout Latin America to distribute these platforms. "Our goal is to deliver solutions based on the platform behind the VIA pc-1 initiative. This puts a range of motherboards equipped with the C7-D processor within reach of users in this region," the executive said.