
Wednesday, July 18, 2007

Bad User Interface of the Week: File It Under “Bad”

The UNIX philosophy says "everything is a file," which sounds sensible on the surface. It is somewhat useful to have a unified interface, but what exactly is a file?

In the UNIX world, a file is an untyped, unstructured stream of bytes. This is fairly useful from a programmer’s perspective; it’s a lowest common denominator that can be used to implement more useful abstractions, but what does it mean from a user perspective?

I tried an experiment. Over the last few months, I have asked a small number of people to tell me what a file is. Some have been computer scientists, some in business, and some fairly naive computer users. Not a single one could tell me what a file was.

As with many user interface concepts, a file is based on a physical metaphor: a thin folder for storing documents. Interestingly, when I asked people to tell me what a document was, I got sensible answers. Terminology can be very important when describing parts of the user interface.

Once a user understands what a file is, there are still some hurdles to overcome. One of the biggest is the concept of "saving." In general, saving means "write the current document state to disk." Somewhat counter-intuitively, saving is typically a destructive operation; when you save, you are usually overwriting the previous version.

How should saving work? Most programs maintain an undo history, but very few save this with the document. Those that do often present a security problem; if you then share the file with someone else, then that person can see the revision history of the document. It turns out that "saving" is actually used for two purposes:

  • Checkpointing
  • Publishing

These are conceptually quite different. Checkpointing is a way of telling the computer that you might want to go back to that particular state. Some filesystems, such as that of VMS, incorporate this automatically. Every time you "save" a file, you are performing a non-destructive operation, creating a new version of it on disk. ZFS can do this very well, allowing each version of a file to be checkpointed and only using disk storage for the changes. LFS also permits this kind of interaction. Most conventional UNIX and Windows filesystems, however, do not.
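To make the idea concrete, here is a minimal Python sketch (the helper name and the ".vN" versioning scheme are made up) of what a non-destructive, checkpointing save could look like when the filesystem itself offers no versioning:

import shutil
from pathlib import Path

def checkpoint_save(path, text):
    """Hypothetical non-destructive save: keep every prior version on disk.

    On a versioning filesystem (VMS, ZFS snapshots, LFS) this would be
    automatic; here we emulate it by copying the current file aside before
    overwriting, so "save" never destroys the previous state."""
    path = Path(path)
    if path.exists():
        version = 1
        while path.with_name(f"{path.name}.v{version}").exists():
            version += 1
        shutil.copy2(path, path.with_name(f"{path.name}.v{version}"))
    path.write_text(text)

# Each save preserves the old contents as report.txt.v1, report.txt.v2, ...
checkpoint_save("report.txt", "first draft")
checkpoint_save("report.txt", "second draft")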

Publishing is quite different. Publishing is the act of sending a document to a remote person. This is often the virtual equivalent of selecting Print, and some systems integrate the UIs. On recent versions of OS X, for example, the Print dialog has a button for sending the PDF through an Automator workflow, which can be used for emailing the PDF version to the recipient.

Combining two very separate user operations into the same user interface because the implementations are similar is a very common mistake. This is something that files themselves often exhibit. When I write a long document, I have a text file containing the typesetting commands and text, and a PDF file containing the typeset output. The same is true when writing software; I have source files and an executable file.

This almost certainly sounds normal to most readers, because this is the way most systems work. But why is it this way? A source code listing and an executable are, in an abstract sense, simply different views on the same data. One can easily be produced from the other (and in some cases this process is even reversible). Many languages do not make this distinction at all; the compiler generates the executable code when the program runs, and if it caches the result, it does so in a manner not visible to the user.

One of the first systems to work this way was Smalltalk, which dispensed with files altogether. The environment contained a class browser, and this allowed classes and methods to be edited without any thought as to how they were stored on disk, nor of whether they needed to be recompiled.

When designing a user interface, always try to remember how your users will be interacting with it. What is actually important from their perspective? No one ever needs to edit files; they need to edit data. Files are one abstraction of data, but they may not be the best one for the task at hand. Music jukebox and photo album software have shown how much better a specialized viewer can be for certain tasks. A user of something like iTunes is aware of tracks, with track and album names, artists, and other metadata, not of a hierarchy of files in folders on a disk. Picking the correct abstractions for user interfaces is as important as picking them for systems.

Monday, July 16, 2007

The Problem with DVDs

I own quite a few DVDs. I also own quite a few CDs. Both look quite similar, but when you put them in the player, you get very different behaviors. When you put a CD into a CD player, it plays music. This seems fairly reasonable, because there isn’t much else you can do with a CD unless you rip it to a computer’s hard disk (in which case, you are no longer dealing with the CD directly).

When you put the DVD into a DVD player, the standard behavior is less well-defined. Some of the older DVDs I own—from the era where things like subtitles and surround sound were listed as "special features"—play the film. Some slightly newer ones play some kind of flashy introduction (which looked dated within a year of release) and then go to the menu. The newest ones tell me that copyright infringement is illegal, suggest a load of other films I might want to buy, and then go to the menu.

The first thing to notice about this insertion behavior is the lack of consistency. This is more the fault of the DVD standard than the individual discs. No single behavior was mandated, so individual studios picked their own favorite. A user who puts a new DVD in the player has no way of knowing what to expect when they press Play.

The second thing to notice is that, as studios became more familiar with the capabilities of the format, the user experience deteriorated. Think for a second about what the user wants to do when he inserts a DVD into a player. Most of the time, he wants to watch the film. With old VHS cassettes, this was about the only thing you could do, so it was the default behavior. With early DVDs, the same was true. At some point along the line, the behavior changed to going to the menu.

Why is going to the menu such a bad initial behavior? For two reasons:

  • Any action requires a button press. If all you wanted to do was watch the movie, you are still required to press the "yes, I actually did put this film in my player in order to watch the film" button.
  • Every single DVD remote has a Menu button. You can always get to the menu in a single button press. Because you start on the menu, however, the selected option has to be "Play Film," because this is what the user almost always wants to do. If the movie started playing automatically, then the default option in the menu could be to play special features.

By making this small change, we turn playing the film from a one-button action into a zero-button action, but we don’t make accessing the special features any harder. Now, getting the special features requires you to press Menu, then Enter, where previously it required you to press Down (or Across, depending on the menu layout) and then Enter. Our most common action is easier, and our alternative action is less hard.

I am going to ignore sound and language selection options in the menu; they shouldn’t even be there because the DVD specification requires audio and subtitle language to be configurable in the player. The disc should just read them from the player, not require the user to make the same choice every time.


Applications Have Startup Problems, Too

This kind of behavior is directly mirrored in the software world. What happens when you launch an application? Taking document-based applications as a particular subset, the most common thing you are going to want to do is create a new document, and some default to this. An example of one that doesn’t is Apple’s Keynote.

When you start Keynote, you are given a dialog box asking you to select a theme. It is not obvious why this is here; Keynote stores each slide’s structure, so it is easy to change the theme after creating the presentation. Why not just default to the last, or most commonly used, theme? After you select one, there is even a drop-down list in the toolbar allowing you to change it.

Another thing to notice about this dialog is that it gives you the option of opening an existing file. Why is this here? It is no easier to click on it than to go to the Open option in the File menu. In fact it’s harder, because the File menu is always in the same place on the screen while the Keynote window moves around, and there is always the Command-O shortcut for the menu item. If it’s a recently modified presentation, it will be in the File, Open Recent submenu, which is even easier than going through this dialog.

I don’t want to single Keynote out particularly. A lot of applications are guilty of this behavior. When you design an application, always try to think of what the user will want to do most of the time, and default to this. Once you’ve got that working, ensure that it’s easy to do non-standard things. If you’re doing public beta releases, you might consider having your beta record what the most common actions are, so you are sure you aren’t a special case.

Using Wireless Technology to Augment Network Availability and Disaster Recovery

A few months ago I wrote about not letting the phone company’s disasters become yours and about ways to protect against the all-too-prevalent cable cut. Many wireless technologies offer a solution to this problem since it is exceedingly difficult to dig up air the way one can dig up a cable. Wireless technology not only increases network availability but it can also help you recover in a disaster.

This month, I provide a few "tricks of the trade" to bolster both benefits to your organization.

Wireless technologies that are useful to the enterprise user for network diversity and disaster recovery include the following:

  • Infrared
  • Microwave
  • Satellite
  • Unlicensed point-to-multipoint systems

Each has its own inherent strengths and weaknesses, and the application you choose to back up (voice, bursty data, Internet, and so on) will also play a role in which wireless technology is best.

Following is a brief summary of some common wireless alternatives.

Infrared (Point-to-Point) Links

Point-to-point infrared links are not radio: they are invisible light. You can think of them like the infrared remote for your TV. Infrared is inexpensive, it does not need to be licensed, and infrared equipment comes with a variety of interfaces including T1 and Ethernet. All pretty good advantages to start.

Infrared requires line of sight; that is, one end must be physically visible to the other end of the link. And because it is actually light and not radio, infrared is much more easily affected by fog, rain, snow, birds, and (practically speaking) anything that will interfere with the propagation of light.

Despite its limitations, infrared is widely used. The equipment can easily be mounted in a building, it does not require any special power or "environment," and its transmitter/receiver can operate through window glass with few problems. If you have an application that requires you to get a T1 across the street or across a small campus, infrared may be your least expensive solution.

Infrared links are also often used for LAN interconnection in buildings that are in close proximity but separated by public rights of way (such as streets) where cabling between buildings is impractical. If you consider the use of an infrared link yourself, be sure you don’t exceed a mile or so (less if you are prone to periodic fog or heavy rain), and that you have clear line of sight.


Microwave Radio

Microwave has broad applicability, high reliability and availability, relatively good ease of use, and relatively low cost. You do need a license to operate most microwave systems, and the more popular frequencies are congested and difficult to get licensed, especially in the major cities.

Like infrared, microwave requires line of sight, which is problematic within major cities. Even if you are lucky enough to get a frequency licensed, all your work could be undone if someone builds a building between the two points on your microwave link. This occurs more often than you might think.

Microwave enjoyed most of its popularity in the 1980s as a "bypass" alternative to go around the local telephone companies when long distance got cheap but the phone links to connect to the long distance providers got expensive. The logical response of enterprise users was to dump the local telephone company and use microwave to connect directly with long distance carriers of the time such as MCI and Sprint.

While the financial motive was the primary driver, it took the enterprise user only until the next cable cut to realize that microwave also had use as a disaster recovery technology. Microwave provided the ultimate diverse route because one cannot dig up air.

In the 20 years since its use as a "bypass" technology, the feature richness and reliability of microwave have increased, and the cost of these systems has dropped significantly. For example, many systems now offer a greater choice of interfaces, with Ethernet and T1 being commonplace.

But as stated earlier, you have to secure an FCC license to operate a microwave system. The manufacturer can help you do this, and there are also numerous consultants who can literally be found in the Yellow Pages to help with the same issue. If you are looking for true diversity at reasonable cost and at higher reliability than infrared, microwave may be the ticket.

Satellite Communications

This discussion would not be complete without a brief overview of satellite communications. Since Katrina, the satellite industry has looked in a big way at disaster recovery. When a widespread disaster occurs, as with Katrina, a major earthquake, or the Christmas 2004 tsunami in the Indian Ocean, satellite might be the only show going in the immediate aftermath.

It really pays to check out this option if you live in a region prone to such disasters. Also, like the other technologies discussed above, satellite communications have advanced by leaps and bounds over the last few years in terms of feature richness.

There are a few disadvantages. Satellite is essentially microwave radio aimed upward—it uses essentially the same frequencies. As such, the same rules hold true regarding the tendency to wash out in heavy rains.

There are also two times every year when the satellite receiver will be aimed directly at the sun—right around the spring or fall equinox. At that time there will be a brief outage. These outages can be planned for, however, because the service provider will know precisely when they will occur.

With regard to equipment, satellite has metamorphosed from the 16-foot dishes of years past to pizza-pan dishes that fit on the side of a building. In fact, in the case of global positioning system (GPS) and freight-tracking technologies, the equipment often fits in your hand. (Consider how handy it might be to have GPS in the aftermath of a tsunami or hurricane when all the street signs and landmarks have literally been washed away!)

Before using satellite, consult with the vendor on propagation delays. It still takes about a quarter of a second to get to a satellite and back because of the limitation of the speed of light. This can introduce a noticeable performance penalty, depending on the data protocol you are using.
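The quarter-second figure is easy to sanity-check. Assuming a geostationary satellite at roughly 35,786 km altitude, a quick Python calculation gives about 0.24 seconds for the up-and-down trip:

# Rough check of the quarter-second figure for a geostationary satellite.
SPEED_OF_LIGHT_KM_S = 299_792          # km per second
GEO_ALTITUDE_KM = 35_786               # approximate geostationary altitude

# Ground -> satellite -> ground for one hop of the conversation.
one_way_hop_km = 2 * GEO_ALTITUDE_KM
delay_s = one_way_hop_km / SPEED_OF_LIGHT_KM_S
print(f"minimum one-hop delay: {delay_s:.3f} s")   # about 0.239 s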

If these delays exist, however, they can often be compensated for by the satellite provider or through the use of various outboard technologies. Nobody can increase the speed of light, but it is possible, for instance, to send more data before expecting a response, thereby increasing performance.

Companies such as Direct PC (I think they are called HughesNet now) actually use satellite for Internet access, so obviously the performance issues, or at least some of the latency issues, have been addressed.

Point-to-Multipoint Systems

Up until now, the challenges of setting up an effective wireless primary and disaster recovery system have always involved trade-offs between cost, complexity, reliability, and time (such as in licensing). This makes the newest entrant of the technologies discussed in this article, point-to-multipoint (P-MP) radio systems, not only an exciting new development but also Leo’s technology of choice.

P-MP marries microwave radio technology to the enterprise and makes delivering services of all types faster and easier than ever before. This technology is also becoming widely used for disaster recovery. Indeed, two users we are familiar with, both of them county governments, have scrapped their AT&T T1s altogether and now use P-MP as the primary technology, with a few T1s held back as the backup path.

Here is how the technology works. P-MP products operate in the 900 MHz and the 2.4, 5.1, 5.2, 5.4, and 5.7 GHz frequency bands. Since these frequencies are lower than many microwave frequencies, "wash out" and restrictions on range are not as much of an issue.

Like the other technologies, a variety of interfaces are available, including T1 and Ethernet. Start-up costs are low. We have seen central unit costs as low as $2500 and "per rooftop" costs in the $500 range. Typically a small antenna is installed on the roof (about a foot long and 4 inches wide); or in cases where the range is greater, a small dish about the size of a satellite TV antenna.

Furthermore, the equipment does not require an FCC license and is streamlined, with the radio built into the antenna in the same 12" x 4" x 2" unit on the roof. It’s incredibly easy to get up and running. Most P-MP platforms also include the most common interfaces, enabling them to integrate easily with standard network management tools and systems.

Obviously any system that traverses an airwave should be encrypted. Look for a system that provides security with over-the-air DES (data encryption standard) encryption or AES (advanced encryption standard) encryption capabilities. Take a good look at security when using any wireless solution!

P-MP systems serve numerous enterprise locations of virtually any size and can be used for distances up to 15 miles (24 kilometers). Point-to-point links can traverse greater distances by augmenting the antennas at both ends (a dish similar to a microwave dish is used in these cases) and in fact approximate the "mountain top to mountain top" links described previously.

Most P-MP systems require line of sight, although some of the ones that use the lower frequencies (such as 900 MHz) do not. The lower frequencies, however, generally limit throughput to T1 speeds (1.544 Mbps)... if you are lucky.

To summarize, P-MP systems, in the opinion of this humble writer, represent the best trade-off of cost, performance, ease of use, and variety of interfaces available to the enterprise user seeking disaster recovery and network availability.

Summary

I like the P-MP systems. First, I have never cared for monopolies like the phone company and I like to have the widest diversity of choice possible.

From a disaster recovery perspective, these systems rock. Imagine a unit you can have shipped in overnight that can reestablish a T1, nail up an Ethernet link, or provide you upward of 10 times the capacity of a point-to-point T1. All without a license, or the need to go to a school, or the need to add yet another black box to interface with your network.

Check them out. You can get more info at AirCanopy.net and myriad other sources. Until next time, have a super and disaster-resilient year!



Thursday, June 7, 2007

Creating a Home or Small Office Server Using Apple's AirPort Extreme Base Station

The latest generation of Apple’s AirPort Extreme base station truly deserves the word extreme in its name. It is one of the first 802.11n wireless routers on the market and it delivers incredible performance when paired with a Mac (or PC) featuring 802.11n support, as well as a much wider range than previous models. Speed isn’t the only reason why the $179 base station is extreme. The device is actually much more than a wireless router. It includes a USB port that can be used to wirelessly share printers or even hard drives.

For home users or small business owners, this amazing combination of features for a relatively low price point is a great tool. Many small businesses need a solution for sharing files, but don’t have the need or resources to set up a full-blown server. While network-attached storage devices are also an option, they are often more expensive, and many don’t offer easy Mac-friendly setup tools.

Easy Setup

Installing the new base station is a fairly simple process—even more so than with Apple’s previous base stations. Like preceding models, the base station is configured using an application rather than the web-based interface used by most home or small office routers. Unlike earlier base stations, which shipped with both a setup assistant and a more advanced administration tool, the newest AirPort Extreme base station ships with a single tool called AirPort Utility (a version is included for both Mac OS X and Windows) that can also be used to manage previous generations of base stations, including the highly portable AirPort Express. Although it is a single utility, AirPort Utility offers both a guided setup interface that is easy to navigate, even for users with limited technical or network experience, and access to more advanced configuration settings.

Sharing Hard Drives

Using the AirPort Extreme base station as a file server provides a great solution for families as well as small office users. It can provide a space for shared files as well as a space for backups. Using a USB hub, you can attach multiple hard drives, enabling you to easily expand your storage needs.

Although the base station can act as a file server for attached hard drives, you will need to first format the drive using a computer. The base station can access drives formatted by either a Mac or Windows, but they must be formatted as Mac OS X Extended or FAT32, respectively. Once a drive is formatted, simply attach it to the base station’s USB port (or an attached USB hub).

Attached drives are automatically detected and shared. By default, hard drives are shared with the same password used to configure the base station. You can, however, choose to use a separate password to allow access to the shared drive(s). (This is a good idea if you don’t want your kids or employees to be able to change the base station’s configuration.) Or you can set up individual user accounts. You can also choose to allow guest access so that anyone on your network has access to the disk (and you can specify whether guests have read and write or read-only access).

When using user accounts, you also can specify whether users have no access, read-only access, or full read and write access to shared drives. If you use user accounts, a Users folder is created at the root level of the shared drive. Each user account that you create is assigned a user folder that only that user can access. This procedure provides an easy solution to giving users private storage space and preventing one user from deleting another user’s files.

In addition to sharing files on the network (wired and wireless) that is created by the base station, there is the option of sharing files over the base station’s WAN port (the one that connects to your Internet connection). If your base station is part of a larger network (such as in a school or business environment), this is a great feature because it allows you to enable access to other people connected to that larger network. It can also be used if you are not part of a larger network and want to access your shared hard drive from another location via the Internet (you simply need the IP address that the base station receives from your Internet provider, which can be found on the Internet tab of its configuration dialog box).

Choosing to make your shared hard drive available over the Internet poses security risks, however. Depending on your Internet connection, simply turning this option on can make your shared drive visible to a large number of people using the same provider. This is why you should never enable the Advertise Disks Globally option using Bonjour (Apple’s zero-configuration network technology) if you choose to share your hard drive in this manner. Ensure that guest access is turned off, and use either a separate disk password or user accounts to secure the drives; that way, the base station’s configuration password is not exposed if someone runs a password-cracking tool against your shared hard drive. If you do use this option, use it sparingly and disable it whenever you don’t need to provide remote access.

Being a cross-platform device, the base station can share disks with both Macs and Windows PCs (regardless of the format of the drive). This is a great touch because many homes and offices have both Macs and PCs. It truly makes the base station a one-stop solution, even if you have a single Mac and multiple PCs, or even no Mac at all.

Connecting to Shared Disks

Apple includes an AirPort Disk Utility that can be used to access shared hard drives. You can also find them by browsing your network as you would to find other file servers or computers with file sharing enabled. The AirPort Disk Utility includes a menu bar indicator for Mac OS X and, by default, automatically detects (and attempts to connect to) any shared hard drives. If you choose to manually browse for shared drives and you are using a password instead of user accounts, you are prompted for a user name and password when connecting. Simply leave the user name section of the dialog box blank.

Sharing Printers

Like AirPort Express, the current AirPort Extreme base station can be used to share attached USB printers. The process is even easier than sharing a hard drive. Simply attach the printer, and the base station makes it available. Computers can locate and access the printer via Bonjour. You can elect to set up the printer using Mac OS X’s Printer Setup Utility or choose it from the Print dialog box’s Printer menu (you’ll find it in the Bonjour Printers submenu). Windows computers can locate and use printers via Apple’s Bonjour for Windows.

As with shared hard drives, you can also elect to share the printer over the base station’s WAN port, and the base station can support multiple printers through the use of a USB hub. A mix of shared printers and hard drives is also supported when using a USB hub.


Introduction to Technical Analysis

Technical analysis. These words may conjure up many different mental images. Perhaps you think of the stereotypical technical analyst, alone in a windowless office, slouched over stacks of hand-drawn charts of stock prices. Or, maybe you think of the sophisticated multicolored computerized chart of your favorite stock you recently saw. Perhaps you begin dreaming about all the money you could make if you knew the secrets to predicting stock prices. Or, perhaps you remember sitting in a finance class and hearing your professor say that technical analysis "is a waste of time." In this book, we examine some of the perceptions, and misperceptions, of technical analysis.

If you are new to the study of technical analysis, you may be wondering just what technical analysis is. In its basic form, technical analysis is the study of past market data, primarily price and volume data; this information is used to make trading or investing decisions. Technical analysis is rooted in basic economic theory. Consider the basic assumptions presented by Robert D. Edwards and John Magee in the classic book, Technical Analysis of Stock Trends:

  • Stock prices are determined solely by the interaction of demand and supply.
  • Stock prices tend to move in trends.
  • Shifts in demand and supply cause reversals in trends.
  • Shifts in demand and supply can be detected in charts.
  • Chart patterns tend to repeat themselves.

Technical analysts study the action of the market itself rather than the goods in which the market deals. The technical analyst believes that "the market is always correct." In other words, rather than trying to consider all the factors that will influence the demand for Gadget International's newest electronic gadget and all the items that will influence the company's cost and supply curve to determine an outlook for the stock's price, the technical analyst believes that all of these factors are already factored into the demand and supply curves and, thus, the price of the company's stock.

Students new to any discipline often ask, "How can I use the knowledge of this discipline?" Students new to technical analysis are no different. Technical analysis is used in two major ways: predictive and reactive. Those who use technical analysis for predictive purposes use the analysis to make predictions about future market moves. Generally, these individuals make money by selling their predictions to others. Market letter writers in print or on the web and the technical market gurus who frequent the financial news fall into this category. The predictive technical analysts include the more well-known names in the industry; these individuals like publicity because it helps market their services.

On the other hand, those who use technical analysis in a reactive mode are usually not well known. Traders and investors use techniques of technical analysis to react to particular market conditions to make their decisions. For example, a trader may use a moving average crossover to signal when a long position should be taken. In other words, the trader is watching the market and reacting when a certain technical condition is met. These traders and investors are making money by making profitable trades for their own or clients' portfolios. Some of them may even find that publicity distracts them from their underlying work.
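As an illustration of that kind of reactive rule, here is a minimal Python sketch of a moving average crossover signal. The window lengths and the rule itself are illustrative only, not a recommendation:

def moving_average(prices, window):
    # Simple arithmetic moving average of the last `window` prices.
    return sum(prices[-window:]) / window

def crossover_signal(prices, fast=5, slow=20):
    """Return "long" when the fast average crosses above the slow one,
    "exit" when it crosses back below, otherwise None. Purely illustrative."""
    if len(prices) <= slow:
        return None
    fast_now, slow_now = moving_average(prices, fast), moving_average(prices, slow)
    fast_prev, slow_prev = moving_average(prices[:-1], fast), moving_average(prices[:-1], slow)
    if fast_prev <= slow_prev and fast_now > slow_now:
        return "long"
    if fast_prev >= slow_prev and fast_now < slow_now:
        return "exit"
    return None

# Feed it a growing price series bar by bar and act only when it returns a signal.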

The focus of this book is to explain the basic principles and techniques for reacting to the market. We do not attempt to predict the market, nor do we provide you with the Holy Grail or a promise of a method that will make you millions overnight. Instead, we want to provide you with background, basic tools, and techniques that you will need to be a competent technical analyst.

As we will see when we study the history of technical analysis, the interest in technical analysis in the U.S. dates back over 100 years, when Charles H. Dow began writing newsletters that later turned into the Wall Street Journal and developing the various Dow averages to measure the stock market. Since that time, much has been written about technical analysis. Today, there are entire periodicals, such as Technical Analysis of Stocks and Commodities and the Journal of Technical Analysis, devoted to the study of the subject. In addition, there are many articles appearing in other publications, including academic journals. There are even a number of excellent books on the market. As you can see from this book's extensive bibliography, which is in no way a complete list of every published item on technical analysis, a massive quantity of material about technical analysis exists.

So, why does the world need another book on technical analysis? We began looking through the multitude of materials on technical analysis a few years ago, searching for resources to use in educational settings. We noticed that many specialized books existed on the topic, but there was no resource to provide the student of technical analysis with a comprehensive summation of the body of knowledge. We decided to provide a coherent, logical framework for this material that could be used as a textbook and a reference book.

Our intent in writing this book is to provide the student of technical analysis, whether a novice college student or an experienced practitioner, with a systematic study of the field of technical analysis. Over the past century, much has been written about the topic. The classic works of Charles Dow and the timeless book by Edwards and Magee still contain valuable information for the student of technical analysis. The basic principles of these early authors are still valid today. However, the evolving financial marketplace and the availability of computer power have led to a substantial growth in the tools and information available to the technical analyst.

Many technical analysts have learned their trade from the mentors with whom they have worked. Numerous individuals who are interested in studying technical analysis today, however, do not have access to such a mentor. In addition, as the profession has advanced, many specific techniques have developed. The result is that the techniques and methods of technical analysis often appear to be a hodge-podge of tools, ideas, and even folklore, rather than part of a coherent body of knowledge.

Many books on the market assume a basic understanding of technical analysis or focus on particular financial markets or instruments. Our intent is to provide the reader with a basic reference to support a life-long study of the discipline. We have attempted to provide enough background information and terminology that you can easily read this book without having to refer to other references for background information. We have also included a large number of references for further reading so that you can continue learning in the specialized areas that interest you.

Another unique characteristic of this book is the joining of the practitioner and the academic. Technical analysis is widely practiced, both by professional traders and investors and by individuals managing their own money. However, this widespread practice has not been matched by academic acknowledgment of the benefits of technical analysis. Academics have been slow to study technical analysis; most of the academic studies of technical analysis have lacked a thorough understanding of the actual practice of technical analysis. It is our hope not only to bring together a practitioner-academic author team but also to provide a book that promotes discussion and understanding between these two groups.

Whether you are a novice or experienced professional, we are confident that you will find this book helpful. For the student new to technical analysis, this book will provide you with the basic knowledge and building blocks to begin a life-long study of technical analysis. For the more experienced technician, you will find this book to be an indispensable guide, helping you to organize your knowledge, question your assumptions and beliefs, and implement new techniques.

We begin this book with a look at the background and history of technical analysis. In this part, we discuss not only the basic principles of technical analysis but also the technical analysis controversy—the debate between academics and practitioners regarding the efficiency of financial markets and the merit of technical analysis. This background information is especially useful to those who are new to technical analysis and those who are studying the subject in an educational setting. For those with more experience with the field or with little interest in the academic arguments about market efficiency, a quick reading of this first part will probably suffice.

In the second part of the book, we focus on markets and market indicators. Chapter 5, "An Overview of Markets," provides a basic overview of how markets work. Market vocabulary and trading mechanics are introduced in this chapter. For the student who is unfamiliar with this terminology, a thorough understanding of this chapter will provide the necessary background for the remaining chapters. Our focus in Chapter 6, "Dow Theory," is on the development and principles of Dow Theory. Although Dow Theory was developed a century ago, much of modern-day technical analysis is based on these classic principles. A thorough understanding of these timeless principles helps keep the technical analyst focused on the key concepts that lead to making money in the market. In Chapter 7, "Sentiment," we focus on sentiment; the psychology of market players is a major concept in this chapter. In Chapter 8, "Measuring Market Strength," we discuss methods for gauging overall market strength. Chapter 9, "Temporal Patterns and Cycles," focuses on temporal tendencies, the tendency for the market to move in particular directions during particular times, such as election year cycles and seasonal stock market patterns. Because the main fuel for the market is money, Chapter 10, "Flow of Funds," focuses on the flow of funds. In this chapter, we look at measures of market liquidity and how the Federal Reserve can influence liquidity.

The third part of the book focuses on trend analysis. In many ways, this part can be thought of as the heart of technical analysis. If we see that the market is trending upward, we can profitably ride that trend upward. If we determine that the market is trending downward, we can even profit by taking a short position. In fact, the most difficult time to profit in the market is when there is no definitive upward or downward trend. Over the years, technical analysts have developed a number of techniques to help them visually determine when a trend is in place. These charting techniques are the focus of Chapter 11, "History and Construction of Charts." In Chapter 12, "Trends—The Basics," we discuss how to draw trend lines and determine support and resistance lines using these charts. In Chapter 13, "Breakouts, Stops, and Retracements," we focus on determining breakouts. These breakouts will help us recognize a trend change as soon as possible. We also discuss the importance of protective stops in this chapter. Moving averages, a useful mathematical technique for determining the existence of trends, are presented in Chapter 14, "Moving Averages."

The fourth part of this book focuses on chart pattern analysis—the item that first comes to mind when many people think of technical analysis. In Chapter 15, "Bar Chart Patterns," we cover classic bar chart patterns; in Chapter 16, "Point-and-Figure Chart Patterns," we focus on point-and-figure chart patterns. Short-term patterns, including candlestick patterns, are covered in Chapter 17, "Short-Term Patterns."

Part V, "Trend Confirmation," deals with the concept of confirmation. We consider price oscillators and momentum measures in Chapter 18, "Confirmation." Building upon the concept of trends from earlier chapters, we look at how volume plays a role in confirming the trend, giving us more confidence that a trend is indeed occurring.

Next, we turn our attention to the relationship between cycle theory and technical analysis. In Chapter 19, "Cycles," we discuss the basic principles of cycle theory and the characteristics of cycles. Some technical analysts believe that cycles seen in the stock market have a scientific basis; for example, R. N. Elliott claimed that the basic harmony found in nature occurs in the stock market. Chapter 20, "Elliott, Fibonacci, and Gann," introduces the basic concepts of Elliott Wave Theory, a school of thought that adheres to Elliott's premise that stock price movements form discernible wave patterns.

Once we know the basic techniques of technical analysis, the question becomes, "Which particular securities will we trade?" Selection decisions are the focus of Chapter 21, "Selection of Markets and Issues: Trading and Investing." In this chapter, we discuss the intermarket relationships that will help us determine on which market to focus by determining which market is most likely to show strong performance. We also discuss individual security selection, measures of relative strength, and how successful practitioners have used these methods to construct portfolios.

As technical analysts, we need methods of measuring our success. After all, our main objective is making money. Although this is a straightforward objective, determining whether we are meeting our objective is not quite so straightforward. Proper measurement of trading and investment strategies requires appropriate risk measurement and an understanding of basic statistical techniques. The last couple of chapters help put all the tools and techniques we present throughout the book into practice. Chapter 22, "System Design and Testing," is devoted to developing and testing trading systems. At this point, we look at how we can test the tools and indicators covered throughout the book to see if they will make money for us—our main objective—in the particular way we would like to trade. Finally, Chapter 23, "Money and Risk Management," deals with money management and avoiding capital loss.

For those who need a brush-up in basic statistics or wish to understand some of the statistical concepts introduced throughout the book, Dr. Richard J. Bauer, Jr. (Professor of Finance, Bill Greehey School of Business, St. Mary's University, San Antonio, TX) provides a tutorial on basic statistical techniques of interest to the technical analyst in Appendix A, "Basic Statistics."

For those who are unfamiliar with the terms and language used in trading, Appendix B provides brief definitions of specific order types and commonly used terms in order entry.

As with all skills, learning technical analysis requires practice. We have provided a number of review questions and problems at the end of the chapters to help you begin thinking about and applying some of the concepts on your own. The extensive bibliography will direct you to further readings in the areas of technical analysis that are of particular interest to you.

Another way of honing your technical skills is participating in a professional organization that is focused on technical analysis. In the United States, the Market Technicians Association (MTA) provides a wide variety of seminars, lectures, and publications for technical analysis professionals. The MTA also sponsors the Chartered Market Technician (CMT) program. Professionals wishing to receive the prestigious CMT designation must pass three examinations and adhere to a strict code of professional conduct. More information about the MTA and the CMT program may be found at the web site: www.mta.org. The International Federation of Technical Analysts, Inc. (IFTA) is a global organization of market analysis societies and associations. IFTA, and its member associations worldwide, sponsor a number of seminars and publications. IFTA offers a professional certification, the Certified Financial Technician, and a Masters-level degree, the Master of Financial Technical Analysis. The details of these certifications, along with contact information for IFTA's member associations around the world, can be found at their web site: www.ifta.org.

Technical analysis is a complex, ever-expanding discipline. The globalization of markets, the creation of new securities, and the availability of inexpensive computer power are opening even more opportunities in this field. Whether you use the information professionally or for your own personal trading or investing, we hope that this book will serve as a stepping-stone to your study and exploration of the field of technical analysis.

Monday, June 4, 2007

Technical Advances Make Your Passwords Practically Worthless

Summary

Passwords are supposed to be kept secret, but due to continuing advances in technology, they are becoming weaker every day. The threat has grown to the point where using a password as the sole form of authentication provides you with almost no protection at all. Randy Nash outlines the dangers facing passwords and suggests some additional measures needed to protect even ordinary digital assets.

Your password is a form of authentication, or identification, used to control access to a given resource. Passwords are supposed to be kept secret, thereby controlling access to important information. But due to continuing advances in technology, passwords are becoming weaker every day. The threat has grown to the point where using a password as the sole form of authentication provides you with almost no protection at all. Cracking a password has become a task that can be accomplished in minutes instead of weeks or months. Additional measures need to become commonplace now to protect even ordinary digital assets.

Why Your Password is at Risk

Your password is used to identify you and provide access to your computer resources. It is a form of authentication that is necessary to determine what rights you have within a system. Digital authentication is generally broken down into three classifications:

  • Something you know: your password, a pass phrase, or your PIN.
  • Something you have: a security token or a smart card.
  • Something you are: biometrics (such as a fingerprint or a retinal scan).

When used as the sole form of authentication, passwords are generally considered the weakest form of authentication. Why? Let's face it; most folks tend to get lazy with their passwords:

  • They devise simple passwords, such as the names of their pets or the names of their favorite sports teams.
  • They use the same password for multiple systems.
  • They write their passwords on sticky notes and stick them next to their computers.

Once your password is no longer secret, it no longer uniquely identifies you – which means it no longer protects access to your valuable information. Unfortunately, even if you do protect your password, there are other ways of obtaining it.

Sniffing Around in Your Data

Bad guys can sniff passwords as they are transmitted over the network by using specialized hardware or software that allows them to access network traffic as it's transmitted over the wire.

Sniffing can provide direct access to passwords if they are transmitted in the clear (without some form of encryption). Even today there are many technologies, applications, and protocols that transmit this sensitive information in clear text without any form of protection. Some examples are:

  • Websites (HTTP)
  • Email (POP)
  • Telnet and FTP

By sending this authentication in clear text, it is immediately available for exploitation without any further level of effort.
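HTTP Basic authentication is a good example: the credentials are merely base64-encoded, which anyone can reverse. A short Python sketch, using a made-up captured header:

import base64

# An HTTP Basic "Authorization" header captured off the wire (made-up example).
header = "Authorization: Basic YWxpY2U6aHVudGVyMg=="

encoded = header.split()[-1]
username, password = base64.b64decode(encoded).decode().split(":", 1)
print(username, password)   # alice hunter2 -- no cracking required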

Encryption: Speaking in Tongues

One method of protecting passwords is to apply cryptography to encode the password so it cannot be observed in a readable form. There are many different methods of encrypting passwords, each with varying levels of protection and security. Some more commonly used examples are:

  • Windows LAN Manager and NT LAN Manager hash (LM and NTLM): NTLM is a Microsoft authentication protocol that uses a challenge-response sequence requiring the transmission of three messages between the client and the server.
  • NTLM v2: An updated version of NTLM that addresses weaknesses in the original implementation.
  • Kerberos: Kerberos is a network authentication protocol that allows individuals communicating over an insecure network to prove their identity to one another in a secure manner.

Each method works by applying a one-way cryptographic algorithm to the password, which creates an encrypted hash. In simpler terms, the algorithm is a form of very complex math that is used to create an encoded version of your password (a password hash). It is generally thought that there is no way to mathematically reverse the process and recover the original password from the hash, which is why it is considered one-way. This hash can still be sniffed from the network, but it cannot be used directly in its encrypted form.
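As a generic illustration of the one-way idea (not the specific LM, NTLM, or Kerberos algorithms), here is a short Python sketch of salted password hashing and verification:

import hashlib, os

def hash_password(password):
    # Salted one-way hash: easy to compute forward, no practical way back.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    # To check a login attempt, re-hash the candidate and compare digests.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == digest

salt, stored = hash_password("hunter2")
print(verify("hunter2", salt, stored))   # True
print(verify("guess", salt, stored))     # False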

Passwords are usually stored in a local system database. This is necessary to allow the system a method of verifying passwords when a user is trying to gain access. These passwords are usually stored in an encrypted form based on the cryptographic hash previously discussed. Unfortunately, this database represents the proverbial pot of gold for anyone wishing to gain access to your information systems.

Various computer operating systems store their passwords in some well-known standard locations. Many Unix systems store their passwords in /etc/passwd, whereas Windows stores them in a local security accounts manager (SAM) database. If attackers gain access to these files, they can easily launch attacks against this cache of information in their efforts to obtain (or crack) the passwords.

Attacking with Dictionaries and Brute-Force

Password attacks have taken many forms, the first of which was probably as simple as trying to guess passwords. The simplest form of guessing passwords was accomplished by manually attempting to log into a computer system and taking your best guesses at the password. Many people choose simple passwords that are easy for them to remember – but that makes them easy for others to figure out as well.

People may also forget or neglect to change default system or account passwords. A quick Google search for default passwords provides extensive listings of default passwords for various systems. Manual password guessing is very slow and tedious, and is further complicated by the fact that many computer systems lock out an account after a number of failed login attempts. The bad guys have reacted to this challenge by automating their password-cracking attacks.

But how is guessing automated? There are two common methods of automated guessing:

  • Dictionary
  • Brute-force

A dictionary attack uses a dictionary of common words and names as the source for guessing passwords. Again, many people choose simple passwords that are easy to remember. This means they will choose common words, names, places, and so on. Dictionaries have been created using these common words and they are available for download and immediate use.
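A dictionary attack is only a few lines of code. Here is a toy Python sketch against a made-up, unsalted MD5 hash; real tools work the same way, just with far larger word lists:

import hashlib

# Made-up target: an unsalted MD5 hash recovered from a leaked database.
target = hashlib.md5(b"letmein").hexdigest()

wordlist = ["password", "123456", "qwerty", "letmein", "dragon"]

def dictionary_attack(target_hash, words):
    # Hash each candidate word and compare it with the stolen hash.
    for word in words:
        if hashlib.md5(word.encode()).hexdigest() == target_hash:
            return word
    return None

print(dictionary_attack(target, wordlist))   # letmein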

A brute-force attack is a little more complex and can take much longer to execute. In simple terms, a brute-force attack attempts all possible character combinations until it finds a match. The total number of combinations is referred to as the keyspace. To know how many possibilities need to be calculated, we take the number of allowable characters (y) raised to the power of the password length (x), that is, y^x. As an example, let's look at using just uppercase alphabetic characters to create an eight-character password. The following example shows 26 characters raised to the power of 8:

  Uppercase alphabet: 26
  Password length: 8
  Keyspace (26^8): 208,827,064,576

Now, what happens if we expand this to all possible characters on the standard keyboard? That’s 96 characters:

  All characters: 96
  Password length: 8
  Keyspace (96^8): 7,213,895,789,838,336
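Both keyspace figures above are easy to reproduce:

def keyspace(charset_size, length):
    # Number of possible passwords: charset size raised to the password length.
    return charset_size ** length

print(f"{keyspace(26, 8):,}")   # 208,827,064,576 (uppercase letters only)
print(f"{keyspace(96, 8):,}")   # 7,213,895,789,838,336 (full keyboard)

# Going from 26 to 96 characters multiplies the work by roughly 34,500 times.
print(keyspace(96, 8) // keyspace(26, 8))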

As the possible character set is increased, the potential number of combinations increases exponentially. This means the computational time to crack these passwords increases in proportion. NIST Special Publication 800-63 provides an excellent discussion of password strength and how it is affected by password attributes (password length and possible character sets).

So, realistically, how long might it take to crack some of these passwords using a brute-force attack? Instead of recreating all the math and scenarios here, I’ll refer you to an article (How Long Does It Take to Crack Passwords?) that provides a detailed explanation of the time breakdown. According to this article, it would take up to 2.1 centuries to evaluate the entire keyspace associated with an eight-character password (based on the entire character set on a standard keyboard). Taking a couple of centuries to crack a password is of no value to anyone. This was an obvious weakness of the brute-force attack, so the bad guys developed new techniques, including distributed computing and Cryptanalytic Time-Memory Trade-Off.

Using Distributed Computing to Become Faster

One of the first attempts at developing faster attack methods was the use of the distributed computing model. There are many well-known examples of this technique, such as SETI@Home and Folding@Home. These projects make use of a screen saver that uses dormant computer cycles to perform complex calculations. This concept was also used in the creation of a distributed password cracker known as Distributed John, or djohn. An excerpt from the project's site explains the process:

"With Distributed John (djohn) you can crack passwords using several machines to get passwords sooner than using a single machine. The cracking in itself is done by John the Ripper and djohn's server (djohnd) divides the work in work packets and coordinates the effort among the clients (djohn), which are the ones who do the work."

This approach gave hackers almost unlimited cracking power. They were limited only by the number of computers that could be assigned to the task.
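The coordination itself is conceptually simple. Here is a toy Python sketch of how a server might split a keyspace into work packets for its clients; the helper name is made up, and this is not how djohn is actually implemented:

def split_keyspace(total_candidates, num_clients):
    # Divide candidate indices 0..total-1 into one contiguous range per client.
    per_client = total_candidates // num_clients
    packets = []
    for i in range(num_clients):
        start = i * per_client
        end = total_candidates if i == num_clients - 1 else start + per_client
        packets.append((start, end))
    return packets

# 26^8 uppercase-only candidates spread over 100 machines; each client then
# enumerates and hashes only the candidates in its own range.
for start, end in split_keyspace(26 ** 8, 100)[:3]:
    print(start, end)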

Using Cryptanalytic Time-Memory Trade-Off to Become More Efficient

Eventually the idea arose that these lengthy calculations need not be done repetitively. In other words, why do the same task over and over? Why not do it one time and save the results for re-use? This idea eventually led to the concept of the Cryptanalytic Time-Memory Trade-Off. I don’t have the space here to discuss the concept in any depth, but it needs to be mentioned because it later led to the implementation of Rainbow Tables for password cracking. Rainbow tables use generated password hashes stored in a lookup table. Thus, they need to be created only one time and then stored for future use. But again, there are difficulties with this approach:

  1. There is still a huge time requirement for creating the tables. This has again been addressed with the application of distributed processing for Rainbow Tables.
  2. The storage requirements for this sort of project are immense (on the order of hundreds of gigabytes). Until recently, this would have been very cost prohibitive. Now, however, it’s possible to buy half-terabyte drives for slightly over $100.
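To give a feel for the trade-off, here is a toy Python sketch using a plain precomputed lookup table; true rainbow tables use hash chains and reduction functions to shrink the storage further, but the "compute once, look up many times" idea is the same:

import hashlib

# Precompute hashes once (the time cost) and store them (the memory cost)...
wordlist = ["password", "123456", "qwerty", "letmein", "dragon"]
lookup = {hashlib.md5(w.encode()).hexdigest(): w for w in wordlist}

# ...then every later crack of an unsalted hash is a constant-time lookup.
stolen = hashlib.md5(b"qwerty").hexdigest()
print(lookup.get(stolen))   # qwerty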

What Threats Lie Ahead?

As technology improves and new ideas take hold, risk will increase accordingly. Some of the biggest threats include:

  • Moore's Law: Moore’s Law states the number of transistors on a chip doubles about every two years. This leads to faster and more powerful CPUs, which will be used to perform calculations ever faster. We also have dual-core processors which multiply the processing power that can be applied to this task. Intel has even boasted about the development of an experimental 80-core CPU. While this isn’t available today, it is expected to be available within five years.
  • High-end graphics cards: Today’s graphics cards are composed of multiple core processors and loaded with their own RAM. ATI and nVidia have each released development kits which allow for the development of programs that can leverage these powerful processors.
  • Gaming consoles: Even more powerful than the high-end graphics cards are gaming consoles such as the PlayStation 3. These systems are now internet-connected and provide even more processing power. This technology has already been applied to the Folding@Home project. A comparison of performance can be seen here. A quick glance shows the power of these two platforms in comparison to the various PC platforms. It’s only a matter of time before these techniques are applied to password cracking and other crypto-based tasks.

What’s the Next Line of Defense?

I think it's clear that the next step should be the implementation of some form of two-factor authentication. While there are many ways to accomplish this, one common approach is to distribute tokens such as the RSA SecurID. This is one of the better-known solutions, but it may not be cost effective for small operations. However, PayPal recently implemented a similar solution (the PayPal Security Key) that it provides to customers for a one-time fee of $5 USD.

There are other methods and products as well, but businesses and government alike should begin evaluating their options. The threat is growing every day, and soon a password alone will not provide sufficient protection.
