A Windows Lament

I’ve been writing software for longer than I’d like to remember.  I began writing code in machine language because there were no assemblers available for my 2MHz 8080, and even if there were, I still had to key in the program through the front panel. I later acquired Altair DOS and a cassette-based assembler called Package-II. When the first version of Basic appeared, I had to integrate my assembly code with Basic, which meant establishing base page vectors to call my code from the Basic interpreter. I recall several conversations early on with two developers, Bill and Paul, who told me that the problems I was experiencing were my fault, and that it couldn’t possibly be their interpreter. Bill was the most arrogant. He told me that I didn’t know what I was talking about, and at one point hung up on me.

I’ve been writing Windows software since its initial release. Since that time I’ve written hundreds of applications, DLLs, system components, and device drivers. I’ve watched Windows grow from its inception to the current version, Windows 8. It has certainly helped my career, and has provided me with a significant source of income over the years.

I was never religious about the operating system. To me, it was always just an enabler; a means to an end. The public rift between Microsoft and IBM and the nauseating back-and-forth between Windows and OS/2 supporters never made sense to me. Applications should have been the focal point, not the operating system. IBM’s failure to recognize this distinction probably contributed to OS/2’s demise. This fact was not lost on Microsoft, however. In addition to hardening the underlying operating system, they began to focus on the development of applications such as Word, Publisher, PowerPoint, and Project. Today, those applications represent the ‘gold standard’ of office software, and generate a significant amount of revenue for Microsoft. Competing products such as Lotus Symphony and OpenOffice have always fallen woefully short in both quality and functionality.

Over time, Windows has become more secure, though it certainly has a long way to go. Earlier versions that shipped with the firewall and DEP disabled demonstrated a lack of foresight and due diligence. In spite of its weaknesses, Windows ultimately achieved dominance, due in large part to the usefulness of the applications that were available. The development of the Windows Server product line helped extend and augment the capabilities of Windows, facilitating its entry into the enterprise. Bill Gates’s directive of “Windows everywhere” was instrumental in creating the entire personal computer industry. One of the reasons for the phenomenal success of the Windows platform was the standardization of the user interface. It allowed users familiar with one application to quickly learn how to use another. As Windows continued to evolve, the bulk of the user interface remained basically the same. Anyone who knew how to use a previous version of Windows could easily become comfortable with the newest version. There were always some differences, but the core user interface remained consistent. That is, until Windows 8.

You have to wonder what they were thinking. As a software developer, I understand the motivation to support touch-screen devices, tablets, and desktops with a common code base and to provide a somewhat consistent user interface across diverse devices. However, radically changing the user interface, and in particular removing the familiar Start button, was as crazy as it would have been for Coca-Cola to drop Coke™ or change the font of their logo, or for Disney to drop Mickey Mouse©™.

Fortunately, we’re hearing that Microsoft plans to revamp the Windows 8 desktop and user interface, and perhaps bring back the Start button in one form or another. This is good news. I’m glad to see they’ve gotten the message. But you have to wonder how a company with perhaps some of the brightest technical folks on the planet could make such an obvious blunder. It’s not that they didn’t have any warning. Early reports on user experience were very critical of the Metro interface.  Developers expressed their dissatisfaction with the technology, and beta users were not happy with it either. Despite the early warning signals, Microsoft continued to push the new paradigm assuming that users and developers would eventually follow their lead.

The introduction of a radical new technology is never easy. It takes persistence and courage to bring innovation to the marketplace. The challenge is to do it with a minimal amount of disruption while continuing to build consensus. It would have been wise to ship Windows 8 with the ability to configure the style of the user interface. That would have permitted those who were more comfortable with the traditional desktop to keep it, and allowed users interested in trying the new stuff to choose the new style.

It takes just as much courage to drag along the old technology as it does to abandon it; and perhaps more. I hope this is a lesson learned.

—Steve

Why cloud computing is not for everyone

I’m often invited by CEOs and CFOs to explain what all the excitement is about this thing called cloud. They’ve all heard the term, but they’re not sure what it really means. They’ve been told that they can save a lot of money by having their employees use the cloud for their computing needs, and in today’s business climate, who’s not interested in saving money? They ask me to explain in simple terms just what it means to move to the cloud, and to find out if it really makes sense for them. They want to know why some companies have been reluctant to adopt cloud computing. “If it’s so good,” they ask, “then why isn’t everyone jumping in?”

There’s nothing new about the cloud. Early mainframes provided cloud-based computing services. RJE, HASP, and TSO allowed users to submit jobs for processing from a remote terminal. With a few keystrokes, anyone with a terminal and a connection to the system could leverage the enormous computing power of an IBM 360 or 370. While somewhat clumsy, these technologies did enable the delivery of large amounts of computing power to remote users in a cost-effective manner. Disk storage was very expensive so data was maintained on the mainframe. Users had no local storage, and if you wanted your data to persist you were charged dearly for the space. The basic elements of cloud computing – centralized computing resources, standardized application services, data storage, and remote connectivity – were already in place.

Years later, bandwidth, storage capacity, and compute power have increased exponentially. Simple character-based protocols have been replaced with rich network traffic. We’re able to process large amounts of data with powerful processors, faster memory, and almost unlimited amounts of storage. High-speed routers, coax, and fiber help deliver this rich content effectively over great distances. Books, magazines, and newspapers are quickly becoming obsolete, replaced with online content. Music and video are routinely streamed to the home or mobile phone, and almost no one uses a dictionary or encyclopedia. The data is maintained in the cloud, and the same content is delivered to everyone. It makes perfect sense to maintain a single repository in the cloud, provided the infrastructure is able to scale properly. Centralized data allows users to access their content on a variety of devices from almost anywhere in the world.

The cloud is much more than a data repository, however. Today’s clouds offer not only virtualized desktops and servers, but resources such as CPUs, memory, LANs, SANs, and networks that can be combined and configured to provide complete solutions in the cloud. In addition to platform services, the cloud can deliver SaaS applications such as word processing and graphics design on an ‘as needed’ or rental basis. Applications in the cloud can be easily updated in one place, and the new version deployed to users without requiring them to do anything. A cloud also makes the perfect platform for a mobile device deployment where security, compliance, governance, and lifecycle management are critical. IT and business processes can be automated to reduce costs, and the cloud can host proactive management tools as well as rules-based analytics to help lower operating costs.

With all of these advantages, why isn’t cloud being embraced by more companies? Why have companies resisted moving their applications en masse to the cloud? Let’s take a look at some of the key reasons why cloud has just not taken off.

First, not all clouds are the same.

A private cloud is installed and hosted on company premises. It is configured and maintained by the company’s IT organization. They establish and enforce security and access rights. A private cloud may or may not allow an incoming VPN for management or external access. The equipment is usually purchased, capitalized, and maintained by the internal staff. Because the private cloud remains on the customer premises, it can utilize existing directory and authentication servers on the local network and can access internal resources.

A public cloud, similar to AWS, provides access to virtual operating system images, instances, and applications for a fee. There are several ways to use a public cloud, but so far, most seem to be using it to spin up a copy of Windows or Linux and kick the tires. Depending on the level of access, users may be able to store their files on a SAN or virtual file system. At the lowest price point, instances might get swapped out or destroyed without warning. Since virtual machines are often shared across a large number of subscribers, performance can suffer significantly as instances are swapped in and out among users, and response times may not be deterministic. Communications between VMs on virtual networks can be slow due to heavy network traffic and network isolation requirements.

Another type of cloud is what I refer to as a local data cloud. A local data cloud contains one or more elements of both a private and a public cloud. For example, suppose a company wants to offer a public cloud where outside users can execute programs, but company employees can log on to the public cloud to perform support and administration tasks. The cloud could be configured to run the users’ virtual machines in the public address space but require those users to be authenticated with the company’s internal authentication server. Another example might be the ability to permit users with an elevated level of entitlement to access certain resources on the intranet while those resources would remain invisible to others.

A compelling variant of the local data cloud is one that provides a single, managed application stack while preventing any data from moving outside the user’s local environment. Data remains on the user’s system or on an internal repository, and never gets past the firewall. In this model, companies and users can leverage the benefit of single-sourced, managed applications in the cloud while still enforcing local authentication and IT policy compliance. Data remains under the control of the company and cannot be accessed by anyone without proper credentials or access rights. The uncertainty of having confidential or sensitive data stored or exposed anywhere outside a company’s intranet is perhaps the most important reason why companies have resisted moving to the cloud. A company’s data is perhaps its most valuable asset, and they simply don’t want it stored anywhere outside the company.

What’s the near future look like for mobile?

Look around. Everyone is carrying a mobile device, from the texting zombies in the supermarket and mall to the men and women conducting their business from a table at the local Starbucks or Panera Bread. Work doesn’t always have to be done at the office or from home. Sure, you can’t normally print documents or take advantage of a large display, but you can interact with clients on websites and through email and social media wherever you are. Mobile devices and ubiquitous connectivity have enabled users to bring the workplace and data to them. This is only the tip of the iceberg. What can we expect to see in the very near future?

  • More and more people will work from home, from mobile offices, or from other remote locations.
  • Work will more often be done during off-hours as workers juggle work-family balance.
  • Wireless broadband connectivity will continue to be intermittent and have spotty coverage so devices will need to have intelligent caching of business data and knowledge.
  • Users will prefer to carry one device that provides access to all of their services.
  • Small form-factor devices will require that software be more intelligent to minimize interactions with small screens and keyboards.
  • Business processes will be streamlined to minimize required user interaction.
  • Security will be paramount as malware and trojan vendors shift their focus to the larger attack surface provided by mobile devices.
  • Management of the mobile devices, including auto-wipe, tracking, and disaster recovery, will become more important as businesses rely more on mobile devices. This is extremely important in an enterprise environment where users often carry sensitive data on their devices.
  • Collaboration between mobile devices will become more important as more workers rely on mobile devices as their primary communications device. This includes instant messaging, video chat, and meeting collaboration services. These services must be secure.
  • Integration with social media sites will become more important as workers become more and more connected with other mobile workers.
  • Mobile workers will likely elect to carry one (and at the most two) mobile devices to do their tasks. Notebook systems are still too large and heavy. Netbooks are better but are still too large to be practical.
  • Despite advances in battery technology, battery life will continue to be an issue. Devices will need to make more efficient use of battery power using intelligent transfer, replication, and caching.
  • Customers and users will depend on the ability to access their data securely and privately from any location. The data should be available across multiple device types, connectivity, and form factors.
  • Mobile devices must support accessibility to enable their use by the disabled as well as to comply with government regulations.
  • Accurate location information will be important to provide location-based services. All devices will provide location information accurate to within a few feet.
  • A global workforce will require all mobile devices to have built-in translation for written and spoken language.

Healthcare

  • Healthcare costs will continue to increase and further strain funding sources such as Medicare. Mobile devices and applications that collect clinical data, share treatment therapies, schedule procedures and order prescriptions will help reduce costs and minimize errors.
  • In the Third World and emerging nations, mobile caregivers like Doctors Without Borders will rely heavily on mobile wireless devices that leverage the wireless infrastructure. Copper connections will only exist in heavily populated areas because of the high cost of deploying those connections.
  • Mobile devices will be routinely used to gather patient data, perform diagnostic procedures, and share data and therapies with other physicians.
  • Mobile devices woven into the fabric of clothing will allow patient data to be monitored and shared with physicians. The physician will be able to interact with the device to change how and what is being monitored without physically seeing the patient.

Manufacturing

  • Business will use mobile devices as a way to place orders, create and execute contracts, and to provide customer support. Back-end processes will be streamlined to permit easy interaction with users to minimize physical interactions with the device.
  • Mobile devices can improve supply chain management by providing spot pricing and availability as well as ordering in commodity and futures markets.
  • Products on the factory floor, work in progress, inventories, and the location of individual items will be easy to track and identify with a handheld device.
  • Shop floor management will use mobile devices to provide real time status of production against requirements and deliverables.

Transportation

  • Transportation companies will continue to use mobile wireless devices as a way to track deliveries. Inexpensive wireless transceivers will be attached to packages to allow their position to be easily located. Airlines will never again lose a customer’s bag.

Finance

  • Customers and mobile users will rely on their mobile devices to perform trades, check stock prices, place orders, etc. Because of the nature of the data, security will continue to drive the adoption of mobile devices. The ability to remotely locate and/or wipe a device is absolutely imperative.

Retail

  • Retail will provide a substantial opportunity for wireless mobile devices. Security will continue to be important as customers will be able to order goods and services using their mobile device. Position information will be important to drive location-based services.
  • Collaboration among retail customers will help drive sales.
  • Collaboration among retailers may allow real time sharing of customer information, buying habits and trends, based on participation and privacy agreements.
  • Integration with social media will continue to drive mobile use among consumers. Applications such as Facebook and Pheed will continue to provide peer-influenced purchase recommendations. Younger users will rely heavily on these recommendations to make a purchase.

Military

  • Military use of mobile devices will increase exponentially with increased bandwidth. The ability to provide real time displays of troop and vehicle movement as well as the ability to control unmanned devices at a low cost will substantially increase mobile usage.

Law Enforcement and Public Safety

  • While law enforcement vehicles are equipped with notebooks, these devices are not often available while out of the vehicle. Mobile devices will provide real time voice recognition, fingerprint and identity checks, and DNA type matching for suspects and individuals without the need to return to the vehicle.
  •  Mobile devices will provide real time video, audio, and crime scene information including location and type of evidence including matching of blood samples, DNA, and firearms information linked to NCIS and similar databases.
  • Wireless devices will allow users to locate public resources such as shelters, and provide the information based on current conditions such as weather and road conditions, with the ability to specify safest routes to those locations.

E.T. Phone Home

The SETI Institute was created by astronomers such as the late Carl Sagan, along with other notable individuals such as William Hewlett and David Packard of Hewlett-Packard fame, Nathan Myhrvold and Paul Allen of Microsoft, and several others. Using data from sources such as Arecibo, Hubble, and Spitzer, SETI has spearheaded the search for life in the universe, as well as the search for extrasolar planets. In the late 1990s, the SETI@home project was established to utilize the computing power of millions of PCs distributed throughout the world as an ad hoc supercomputer of sorts to help sift through the mounds of data received from optical and radio telescopes, including the Hubble and Spitzer Space Telescopes. The method used by SETI and others to search for extraterrestrial life makes the broad assumption that other life forms would use the electromagnetic spectrum for communications. We assume that the laws of physics are constant in the observable universe, so other intelligent civilizations would have also invented or stumbled across the idea of using the electromagnetic spectrum for communications.

One hole in this strategy is that E.T. may not be using a phone or a radio to call home. Although we pat ourselves on the back for all of our scientific inventions, it could be that there’s an entirely different medium of communications staring us right in the face. In fact, it may be doing just that. It could be that photons can be designed to carry information at the speed of light, and may employ some unknown type of encoding or modulation to carry messages. E.T. might even use dark energy to communicate, or perhaps employ neutrinos. We have enough trouble just detecting neutrinos, but to a more advanced civilization, perhaps these elusive little particles are as common as table salt.

Another possible flaw in this strategy is that there may in fact be life out there, but perhaps it is microbial in nature and is unable to answer. We’ve certainly discovered many examples of extremophiles here on Earth, so it could be that space is teeming with these forms of life. We just don’t know what to look for. Fermi suggested that, given the size and age of the universe, it is unlikely we’re alone; more likely, we’re just not looking in the right place or using the correct methods. Of course, there are other theories. Some feel that life is just a transitory stage and that we will ultimately dry up and blow away like doggy doo-doo. Still others feel that we’re being kept in a zoo, perhaps contained so we don’t screw up the rest of the universe.

Perhaps the biggest joke of all is using the human race as a yardstick for defining ‘intelligent life’. It’s pretty arrogant to use ourselves as the gold standard for intelligence. We’re a race that over centuries has butchered our brothers and sisters to possess their wealth, gain power, or force our views of religion on each other. A list of the genocides, wars, and major conflicts throughout the relatively short time we’ve inhabited our planet could fill volumes. In the last two hundred years alone we’ve managed to strip the Earth of most of its natural resources, pollute our oceans, rivers, and streams, and partially destroy the critical elements of our atmosphere that shield us from harmful radiation and help maintain a climate conducive to life. We’ve already left enough nuclear waste to keep our descendants on their guard for the next 25,000 years.

In our search for E.T., we often state that we’re looking for ‘other intelligent life in the universe’. This presupposes that we’re in the same category as extraterrestrials because after all, they couldn’t be more intelligent than we are, right?

Steve Mastrianni

Where the heck is E.T.?

Much has been published about the search for “intelligent life”. We’ve been scanning the heavens for radio signals that might have been sent by another civilization or from another planet, although we haven’t found anything yet. We’ve also been broadcasting our own radio signals, hoping that someone or something in the universe is also listening. We’ve actually been broadcasting our existence for some one hundred years now in the form of television and radio broadcasts. In January of 1903, Guglielmo Marconi transmitted a signal from Massachusetts to Great Britain, and in 1909, Marconi and Karl Braun were awarded the Nobel Prize in Physics for their work in wireless radio transmissions. Radio and television broadcasts have been emanating from Earth on a regular basis for decades, and the signals that have been created here are now on their way to other parts of the universe. Since those signals are relatively simple, it should be easy for an advanced alien civilization to receive and decode them. So why have we never received a reply? Where is everyone?

One of the reasons we haven’t heard from anyone is that the universe is a big place; a very, very big place, and it takes a very long time for a radio signal to travel from one star to another. Let me try to put it in perspective.

Throughout the universe, the laws of physics appear to be uniform. Gravity, dark energy, electromagnetism, and light appear to work the same as they do here on Earth. Einstein’s cosmic speed limit of 186,000 miles per second applies not only to light but to radio waves sent from Earth as well as radio waves sent from other places in the universe. While the speed of light is fast, it pales in comparison to the immense distances encountered in the universe.

The Earth orbits our Sun at a distance of some 93 million miles. Several other planets also orbit the Sun, and together they comprise our solar system. Our solar system is located in one of the arms of a spiral galaxy we call the Milky Way. The Milky Way galaxy is a group of over 400 billion stars and planets, and it is over 120 thousand light years wide. The Milky Way is so large that our solar system has made only about 16 round trips around the center of the galaxy since the solar system formed some 4.6 billion years ago.

The galaxy closest to us where we might find life is M31 in the constellation Andromeda, a galaxy with over one trillion stars. Close is a relative term, however, since Andromeda is some 2.5 million light years from Earth. The light we see from Andromeda has taken about 2.5 million years to get here, so we’re seeing it today as it was 2.5 million years ago. A radio signal from Andromeda would have to have been sent about 2.5 million years ago for us to receive it today, and it will take our own radio signals 2.5 million years to reach Andromeda. Since we’ve only been broadcasting for about 100 years, it will be a long time before anyone there might receive them.

Suppose we wanted to travel to Andromeda. Using our current propulsion technology, a space shuttle that left Earth would take about 75 billion years to reach Andromeda. If we were able to travel at the speed of light (which is impossible given Special Relativity), it would still take 2.5 million years to get there. And as galaxies go, Andromeda is our next door neighbor.

In spite of these obstacles, we should not stop looking. It could be that other civilizations are much more advanced and may have other communications technologies that are superior to ours. Some of those civilizations could have been around much longer than the human race, and may have been searching for other life forms millions of years before the human race appeared on the Earth.

Another possibility is that they don’t want to disclose their existence for a variety of reasons.

There’s another possibility, however. I’m convinced that if an advanced civilization ever looked at us, they’d consider us a barbaric and backward society and not worth the effort. These potential visitors would only have to review our history of wars, genocide, and violence to determine that there’s no intelligent life down here. They’d probably decide to let natural selection take its course.

Writing Autonomic Software

Computer users are just interested in getting their work done. Whether surfing the Web, editing photographs, preparing architectural drawings, or monitoring the weather, computer users expect the computer system and software to do what is expected. They don’t want to see cryptic error messages, warnings about a new critical update, or program execution errors that cause all their work in progress to be lost. While those in the software engineering profession are used to such behavior and tend to tolerate it, most users find it annoying and even intimidating. As engineers, we haven’t done a good job of isolating the user from things they shouldn’t have to understand, and moreover, things that we can fix automatically.

When customers buy telephone sets and plug them into the wall jack, they pick up the receiver and expect to hear a dial tone if the line is active. They don’t need to know anything about the modulation techniques used, nor do they have to adjust any voltages or enter special codes; it just works. The personal computer is much more complex than an analog telephone, but the users are often one and the same. In spite of this, we continue to write software as if users were skilled engineers who understand what a General Protection Fault is, or what an illegal memory reference means. We fill applications with meaningless dialog and message boxes. We make it a point to trap every exception, and then present the user with this complex and meaningless information as if they should be able to understand what happened, and then leave them to infer what to do next. These messages and dialogs are interesting and informative to us as developers, but mean very little to the average user. Even a technically innocuous message can be intimidating, so the application or system should do its best to fix the problem without the user’s knowledge or intervention.

An autonomic application is an application that relieves users from the drudgery of dealing with such things as update notifications and error messages, and just lets them get their job done. For example, if a critical update is available, that update should be downloaded and installed automatically. If it is not a critical fix, it should not be marked as such. Of course, there are a few caveats.

First, developers and other similar users should have the ability to hold off even critical updates because they need to control their development environment. Second, there must be a rollback capability in the event that the critical update causes the system to work incorrectly. These items should not be buried in something called a Control Panel, and then Add/Remove Programs, but should be easily accessible with a one-button Help key. The user should be presented with a button next to a caption that says something like “My computer does not appear to be working correctly since I installed the critical update on 04/10/2002 at 13:40. I’d like to restore the system to the way it was before I installed the critical update on 04/10/2002 at 13:40.” If the critical update was a device driver, the user should be able to click a button next to the caption “My computer does not seem to be working correctly since I installed the sound card drivers on 03/12/2002 at 17:50. I’d like to restore the sound card driver to the way it was before the upgrade I performed on 03/12/2002 at 17:50.”

During the removal of software, the user should never be presented with dialog boxes asking if it is “OK” to remove certain DLLs that might not be in use or that might be shared with other applications. Most users are afraid to say yes or no, and may abandon the process, leaving the software in a corrupt state. An autonomic system should figure out which option makes the most sense and do it automatically with no operator intervention.

In the case of an error during program execution, the user should never be presented with something like “the application performed an illegal access to location 0x340003ab”. A better description would be “The program you were running, “[Program Name]”, encountered a programming error that caused the program to stop working. You did nothing wrong, nor did you lose any work, as the work that you were doing was saved automatically. You can access this saved work in the Save folder. Check the software manufacturer’s Web site to see if there are any updates for your software or answers to the problem you’re having.”
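
As a rough sketch of what that could look like in practice, the Python handler below replaces the raw fault report with the kind of message described above. The autosave_document and show_message helpers are hypothetical stand-ins for a real editor’s save logic and message box.

```python
import sys
import traceback

AUTOSAVE_PATH = "recovered_work.txt"      # hypothetical recovery location


def autosave_document(path):
    # Stand-in for a real editor's autosave; a real application would
    # serialize its open documents here.
    with open(path, "w") as f:
        f.write("(recovered document contents)")


def show_message(text):
    # Stand-in for a GUI message box.
    print(text)


def friendly_excepthook(exc_type, exc_value, exc_tb):
    """Quietly save the user's work, log the details for developers,
    and show a plain-language message instead of a hex address."""
    autosave_document(AUTOSAVE_PATH)
    with open("error.log", "a") as log:
        traceback.print_exception(exc_type, exc_value, exc_tb, file=log)
    show_message(
        "The program encountered an internal error and had to stop. "
        "You did nothing wrong, and your work was saved automatically to "
        f"'{AUTOSAVE_PATH}'. Please check the manufacturer's Web site for updates."
    )


sys.excepthook = friendly_excepthook
```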

One of the side effects of improperly written C++ programs is the memory leak. In this case, the program requests a block of memory but never releases it. Over time, the offending program consumes the available memory, causing the system to slow to a crawl or become unresponsive. Instead of fixing this problem, the user is presented with a message that says something like “system memory is low, you must shut down running programs.” Here the user is supposed to know what “system memory” is, and what to do about the problem. No other useful information is usually given. Yet the system knows what program is causing the problem, and even how fast that program is using up resources. The application can be instrumented to determine what resources have been requested and when they are returned to the resource pool. Over time, the system can determine which application is likely causing the problem, and can automatically stop that program from being loaded the next time the system is restarted. The system can then report to the user that the program has a problem and has been using up valuable resources without releasing them correctly. The user can then be directed to check the program manufacturer’s Web site for updates to that program, or asked not to use the program until the problem has been identified and corrected.
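
A minimal sketch of that kind of watchdog, assuming the cross-platform psutil package and purely illustrative thresholds, might sample each process’s resident memory and flag the ones that only ever grow:

```python
import time

import psutil  # assumes the psutil package is installed


def find_leaky_processes(samples=5, interval=60, growth_mb=50):
    """Flag processes whose resident memory grows monotonically across
    several samples; the thresholds here are illustrative, not tuned."""
    history = {}
    for n in range(samples):
        for proc in psutil.process_iter(["pid", "name", "memory_info"]):
            mem = proc.info["memory_info"]
            if mem is None:
                continue  # access denied; skip this process
            rec = history.setdefault(proc.info["pid"],
                                     {"name": proc.info["name"], "rss": []})
            rec["rss"].append(mem.rss)
        if n < samples - 1:
            time.sleep(interval)

    suspects = []
    for pid, rec in history.items():
        rss = rec["rss"]
        always_growing = len(rss) == samples and all(b > a for a, b in zip(rss, rss[1:]))
        if always_growing and rss[-1] - rss[0] > growth_mb * 1024 * 1024:
            suspects.append((rec["name"], pid, (rss[-1] - rss[0]) // (1024 * 1024)))
    return suspects  # e.g. a list of (name, pid, growth in MB) tuples
```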

Another area that users find difficult is configuring their system with the correct drivers, protocols, and networking software to get connected to a network or the Internet. We expect users to understand such things as IP addresses, TCP/IP, NetBIOS, DNS, and WINS. Most users don’t know what these acronyms mean, let alone understand how to configure them. An autonomic application or connection wizard should not make the user traverse a set of dialogs or force them to enter dozens of meaningless parameters just to surf the Web. Configuration should be automatic; the user should not have to enter any parameters, nor should they be forced to understand the details of network communications.

The use of DHCP went a long way toward remedying some of these problems, but this is still not an autonomic solution. In order to use DHCP, the system first has to have a working network connection. In the case of a wired LAN connection, this means that the network adapter card must be installed, the correct drivers must be loaded for the card, the network card must be working properly with no resource conflicts, the TCP/IP protocol must be loaded and associated with the adapter, and the cable from the network card must be plugged into an operational LAN. In a wireless situation, almost all of these conditions must be met, but instead of having to be plugged into a LAN, the computer must be able to dial a server and establish an authenticated connection. If the network supports DHCP, the system should be able to use the DHCP protocol to locate most of the settings.
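
The prerequisites in that list are exactly the things an autonomic client could verify before it ever attempts DHCP. A minimal sketch, again assuming psutil and a hypothetical adapter name:

```python
import psutil  # assumes the psutil package is installed


def ethernet_ready(ifname="Ethernet"):   # "Ethernet" is a hypothetical adapter name
    """Check the local prerequisites DHCP silently depends on: the adapter
    exists, the link is up, and a speed has been negotiated."""
    stats = psutil.net_if_stats().get(ifname)
    if stats is None:
        return False, "adapter not present or driver not loaded"
    if not stats.isup:
        return False, "adapter is down or the cable is unplugged"
    if stats.speed == 0:
        # some drivers report 0 even when the link is up, so treat as a hint only
        return False, "no link speed negotiated"
    return True, "link looks healthy; DHCP can be attempted"
```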

While this scenario works well for a networked home or small business, it does not work well in an enterprise environment. In the corporate environment, users are often required to access the Internet or email by using a proxy server. The purpose of this proxy server is to route IP traffic around the firewall while maintaining security. Users must know the name of the local proxy server and they must configure their system to allow access outside the firewall. There is no industry-wide standard for naming proxy servers, and thus no mechanism similar to DHCP to allow the automatic configuration of the user’s system.

Because of the complexity and number of parameters required to configure a system for network access, autonomic connectivity is an important part of autonomic computing. When a user plugs the network cable into the wall jack, the system should work just like the earlier example of the telephone. The user should not have to enter any parameters, install any drivers or protocols, or enter the names of local servers. The user just needs to get work done, and should not be distracted with dialogs, messages, or connections that are never made. This is even more important to the ‘road warrior’, who is often seen trying to retrieve his or her email between connecting flights. These users don’t have time to configure their systems before the next important meeting.

Once connected, we can enhance the autonomic computing experience by providing server-based content, such as help, tips, regional alerts, regional configuration changes, access to local resources, printers, storage, and services. Using the server, the client system can determine its location based on information in, say, an LDAP server, measuring network hops or network traffic speed, or if multiple access points are available, using some type of triangulation. Once the location is known, the server can help the user get connected. The server could install a small agent on the client, and that agent could then query the server for connection information. Using the connection information, the agent can configure the user’s system, making sure all the correct software is installed. If software needs to be installed, the server can download it to the client and have it installed. Web services can and should be used to provide connectivity information to the client from specialized web sites.

In an enterprise environment, the system can contact other systems and peers. With the proper authentication, the peer system could provide the client system with the configuration parameters and software necessary to perform various tasks.

The most important aspect of this procedure is that it should be done completely automatically, and should require no user intervention. One of the factors limiting this type of dynamic configuration has been the inability to change network parameters without the requirement to reboot the system. Configuration of this type requires the ability to build a network stack dynamically, allowing parts of the software stack to come and go without the need to reboot. Windows XP shows great promise in this area by allowing many of the network parameters to be changed without rebooting. It provides the ability to roll back a driver, although this feature still requires a great deal of user intervention. Windows XP introduced side-by-side DLLs, allowing DLLs to be used for a particular application without replacing an existing DLL with the same name that is also used by other applications.

Autonomic software falls into the following categories, each of which can be implemented at some level using current hardware and software technologies. The current release of Windows XP addresses some of these issues, but goes only part-way in providing true autonomic behavior. The categories are:

  • Break/Fix
  • Proactive monitoring
  • Device instrumentation
  • Application instrumentation
  • On-Demand Wizards
  • Collaboration

Break/Fix is the category that handles errors in real time, as they occur. Break/Fix implementation needs to be done locally, on the system that experienced the fault. An example of Break/Fix would be what happens when a PCMCIA card or USB device is inserted. Windows detects the insertion, identifies the device, and checks to see if a driver is already loaded for the device. If the driver exists, the driver’s entry points are called to configure the resources for the new device. If the driver is not resident, the system prompts the user to insert a driver disk or allows the user to search the web for a suitable driver. If the driver installation files and binaries are located in a specific folder, that folder can be automatically searched. If found, the driver is installed, the configuration entry points are called, and the device becomes available. The user should not be involved in this process unless a critical error is encountered. All error logging should be to a database of actions and results that is linked to the device and device driver. Telling the user that the system installed the device and that the device is now available may be somewhat gratuitous on the part of the system, and the user may not want to hear about it anyway.

Break/Fix requires instrumentation to detect, identify, and fix problems. Microsoft Windows provides instrumentation for system objects such as devices and applications through the Windows Management Instrumentation subsystem. Devices that are Windows Management Instrumentation Query Language (WQL) certified provide instrumentation methods to allow the device drivers to be queried or controlled via the WMI APIs. Applications can also be instrumented to provide better Break/Fix information. While the instrumentation of these objects does come with a slight performance penalty, the ability to fix problems automatically is worth taking the hit.
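
As an illustration of what such a query looks like, the sketch below uses the pywin32 bindings to run a WQL query against the local WMI repository and list Plug and Play devices that are not reporting an “OK” status. The query and property names follow the standard Win32_PnPEntity class; the remediation step is left as a comment.

```python
import win32com.client  # assumes the pywin32 package; Windows only

# Connect to the default WMI namespace (root\cimv2) on the local machine.
wmi = win32com.client.GetObject("winmgmts:")

# Ask WMI for Plug and Play devices whose reported status is not "OK".
query = "SELECT Name, Status FROM Win32_PnPEntity WHERE Status <> 'OK'"
for device in wmi.ExecQuery(query):
    # A Break/Fix component would try to reset or reinstall these devices;
    # here we simply list them.
    print(f"{device.Name}: status {device.Status}")
```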

To fix a problem, the system may have to connect to a peer or server to retrieve new software or parameters to fix it. However, the fault may be due to a problem with connectivity. In this case, it is likely that the system cannot contact a server or peer system to get help, so the problem must be fixed locally. Once a connection is established, the autonomic services available to the client system can be extended with a server.

While Break/Fix solves problems when they occur or immediately after, proactive monitoring attempts to identify problems before they happen, to avoid having to invoke the Break/Fix mechanism. Proactive monitoring can detect when certain network connections are slow or when connection quality has degraded over time, and attempt to repair those connections. It can detect when system resources are being used up by certain programs, and attempt to fix the problem before the system grinds to a crawl. Proactive monitoring can detect when a disk drive is getting full, or when a user appears to be stuck figuring out how to do something.
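
A trivial example of the “disk getting full” case, using only the Python standard library (the path and threshold are illustrative):

```python
import shutil


def check_disk_headroom(path="C:\\", warn_pct=90):
    """Proactive check: notice a nearly full disk before a save fails."""
    usage = shutil.disk_usage(path)
    used_pct = 100 * usage.used / usage.total
    if used_pct >= warn_pct:
        # An autonomic system would start cleaning temporary files here
        # rather than (or before) telling the user.
        return f"disk {path} is {used_pct:.0f}% full; freeing temporary files"
    return None
```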

In Microsoft Office, the Office Assistant attempts to guess what the user needs help with, and frequently offers to perform an operation automatically, such as adding bullets to a list. For example, when the user is observed entering text in Microsoft Word, the Rocky object wags its tail and sniffs the ground. If data is being entered by a fast typist, the Rocky object pants as if it was working hard. If the user stops entering data for an extended period of time, the Rocky object lies down and goes to sleep.

A simple example of proactive monitoring can be found in Microsoft Office. When the user performs a repetitive operation two or more times, the Rocky object will suggest a shortcut to performing the same operation if one exists. This simple form of proactive monitoring could be extended to periodically check the size of the Word file in memory against the available disk space to make sure there’s enough space available to save the document. It could monitor the mouse movement to determine if the user is having trouble locating a small object with the mouse, such as a single pixel in a CAD program. The system could automatically adjust the mouse positioning by introducing a filter or adjusting the mouse resolution to provide more accurate pointing. When the system observes that some time has elapsed since the user required this feature, it could automatically remove the filter or revert to the original resolution. The user should never have to traverse a half-dozen windows and dialogs to find the mouse settings, and then do it again while trying to remember what the last settings were.

The system could monitor network traffic, watching for bandwidth degradation or bottlenecks. For example, when the response time from a particular name server becomes a problem, the system could attempt to locate another name server with less traffic and route requests through that name server. At a later time, the system would revert to the original name server to see if things had improved.
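
A sketch of that idea, assuming the dnspython package and hypothetical server addresses: it times a test lookup against the current name server and quietly falls back to an alternative when the response is too slow.

```python
import time

import dns.resolver  # assumes the dnspython package

PRIMARY = "192.168.1.1"             # hypothetical current name server
FALLBACKS = ["8.8.8.8", "1.1.1.1"]  # hypothetical alternatives


def pick_responsive_nameserver(test_name="example.com", budget_secs=0.5):
    """Return the first name server that answers within the time budget,
    preferring the one already configured."""
    for server in [PRIMARY] + FALLBACKS:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        resolver.lifetime = 2.0          # hard timeout per attempt
        start = time.monotonic()
        try:
            resolver.resolve(test_name, "A")
        except Exception:
            continue                     # unreachable or timed out; try the next one
        if time.monotonic() - start <= budget_secs:
            return server
    return PRIMARY                       # nothing better found; keep the original
```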

Another benefit of proactive monitoring is that the system can become more user-friendly by adapting to the way the system is being used. Users who perform operations a certain way could have their user interface “custom fit” to their habits. For example, some users open an application by launching it from a shortcut, then open a file using the standard File->Open menu option. The user interface for the installed programs could be modified to always include a File->Open menu item whether it exists in the program or not. Other users may never start applications directly, but may invoke them indirectly by clicking on the file type registered for that program. In this case, the system could hide those options and not expose them in the application’s menu. While this is not a compelling feature, it’s easy to imagine other features that could be implemented based on the user’s work style.

Windows instruments devices using WMI. Device manufacturers that provide WQL compatible devices are required to supply device drivers that support WMI. Using WMI, Windows and applications can query the state of any managed device, and can perform operations on those devices such as disabling a device or changing a device parameter. WMI generates events which can be used to determine what action to take. A program registers for certain events or classes of events, and then acts upon those events or passes them on to the next event handler in the chain.
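
The registration step looks roughly like the sketch below, which mirrors the standard VBScript pattern through pywin32: it subscribes to process-creation events and handles each one as it arrives. The polling interval and the event class are just examples.

```python
import win32com.client  # assumes the pywin32 package; Windows only

wmi = win32com.client.GetObject("winmgmts:")

# Register for an event class: here, new process instances, polled every 5 seconds.
watcher = wmi.ExecNotificationQuery(
    "SELECT * FROM __InstanceCreationEvent WITHIN 5 "
    "WHERE TargetInstance ISA 'Win32_Process'"
)

while True:
    event = watcher.NextEvent()          # blocks until the next matching event
    process = event.TargetInstance
    # A real handler would act on the event or pass it down the chain;
    # here we just log it.
    print("process started:", process.Name)
```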

WMI is primarily a reactive Break/Fix mechanism in that it doesn’t provide any proactive monitoring of a device that might cause some other action to be taken. It will report errors or problems with a device but will not undertake any corrective action by itself. Almost all of the parts are there to extend WMI to be more proactive. While WMI is rich with features, it is based on the outdated COM architecture that has been superseded by the new Common Language Runtime (CLR) model. WMI is difficult to learn, and requires a great deal of time to master. It requires programmers to learn yet another underlying technology which should be part of the system infrastructure and exposed through standard frameworks. This instrumentation should be represented with design patterns and provided as standard application development templates as part of the development tools.

Generating events is only part of the story, however, because unless there’s a mechanism in place to analyze and correlate those events, the event information is not very useful. What’s missing in WMI is a rules-based event monitor that can determine if something will likely fail based on the sequence and proximity of certain events, and what can or should be done about it. The ability to predict a problem or failure before it occurs can save users valuable time and will help lower the cost of ownership.
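
Such a monitor does not have to be elaborate. The sketch below is one hypothetical shape for it: rules are just predicates over a sliding window of recent events, paired with an action to run when the pattern appears.

```python
import time
from collections import deque


class RuleEngine:
    """Correlate recent events against simple rules. Each rule is a
    predicate over the recent-event window plus an action to run when
    the predicate fires."""

    def __init__(self, window_secs=300):
        self.window = deque()            # (timestamp, event) pairs
        self.window_secs = window_secs
        self.rules = []

    def add_rule(self, predicate, action):
        self.rules.append((predicate, action))

    def post(self, event):
        now = time.time()
        self.window.append((now, event))
        while self.window and now - self.window[0][0] > self.window_secs:
            self.window.popleft()        # drop events older than the window
        events = [e for _, e in self.window]
        for predicate, action in self.rules:
            if predicate(events):
                action(events)


# Example rule: repeated retry events from a disk within the window are
# treated as a warning sign, before an outright failure occurs.
engine = RuleEngine()
engine.add_rule(
    lambda evs: sum(1 for e in evs if e.get("type") == "disk_retry") >= 3,
    lambda evs: print("disk is reporting repeated retries; schedule a backup"),
)
engine.post({"type": "disk_retry", "device": "PhysicalDrive0"})
```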

Like devices, applications can also be instrumented. Application programmers can create custom application events that can be handled by WMI and passed on to custom event handlers. This requires extra time to be allocated for designing, coding, and testing the event generation and handling features of an application. This makes for a lengthier debugging process to make sure that all of the events are handled correctly. Instrumentation is something that should be designed into an application at the beginning of the process rather than as an afterthought. Applications can get notified when a device goes away, or when a resource is freed up. Events are generated asynchronously, so the application can continue running until an event occurs.

In current versions of Windows, WMI provides developers with methods to instrument an application, but that’s only part of the job. What’s needed is not only a way to instrument the application, but to coordinate and map those events into a set of actions that can potentially remedy the situation. The solution may be a local setting that can be fixed without contacting a server or peer. In this case, the client system would just fix the problem and continue working.

An On-Demand wizard is like a personal servant that is commanded by the user to perform a particular task. It is a user-initiated action that requires an immediate response. A proactive monitoring component may be unaware that the task even needs to be performed, or the task might be something that is not part of the normal operation of the component. An example of this is getting connected to the Internet from inside an enterprise. As previously described, this can be a daunting task. Using DHCP, the system can retrieve the address of a local name server and the local gateway, and obtain an IP address. This assumes, however, that the network adapter is properly configured, that the correct drivers are loaded, that the correct protocol software is loaded, and that the configuration is set to DHCP. If these options are not correct, the system won’t even be able to get an IP address. Because connectivity is not always possible, the drivers, settings, and protocols necessary for establishing connectivity must reside on the client machine.

Collaboration assumes some type of connectivity, either to a server or another peer. In the case of a server, the client can contact the server for information about a particular application or device driver. For example, if a program is started but then quickly fails or consistently fails during a particular operation, the client system can contact the server to find the latest level of the offending program, and to see if any patches are available. Microsoft Update performs part of this service by contacting the Microsoft Update server and comparing the client software levels with the levels on the server. If a newer version is found, it is either downloaded and installed or downloaded for installation at a later time. The program could contact a peer system to inquire if the peer is aware of a fix for the problem, or if perhaps the peer system has the fix available for download.

It is possible that the problem could be one that the event monitor does not recognize, or that it has no fix for. In this case, the client system can contact the server to see if it has a resolution to the problem. The client system could also contact a neighboring system on a peer basis to find out if perhaps the same problem had been encountered and if so, how the problem was fixed on the peer. For example, if the user plugged their system into a network and found that they could not access the Internet outside a corporate firewall, the user’s system could broadcast a help inquiry to neighboring systems asking “do you have Internet access?” If the answer is “yes”, the client could then ask the peer for its network settings and configuration and use that information to update the client’s own network settings.
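
The broadcast itself is simple. The sketch below shows the asking side using a plain UDP broadcast on a hypothetical helper port; the message format and port number are assumptions, and the peers would need a matching listener.

```python
import socket

HELP_PORT = 50505  # hypothetical well-known port for peer help requests


def ask_peers_for_internet(timeout_secs=2.0):
    """Broadcast 'do you have Internet access?' and return the first
    peer's address and reply, or None if nobody answers."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout_secs)
    try:
        sock.sendto(b"HAVE_INTERNET?", ("255.255.255.255", HELP_PORT))
        reply, peer = sock.recvfrom(4096)
    except socket.timeout:
        return None                      # no peer answered; fall back to a server
    finally:
        sock.close()
    # A cooperating peer would reply with its proxy name, DNS servers, etc.
    return peer[0], reply.decode("utf-8", errors="replace")
```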

While we can do a fair job of making systems and software more autonomic using current technologies, we could do a much better job with the addition of some extra features. We will classify these features using the same categories discussed earlier.

  • Break/Fix
  • Proactive monitoring
  • Device instrumentation
  • Application instrumentation
  • On-Demand Wizards
  • Collaboration

Instead of requiring applications or drivers to monitor the health and state of the system, the operating system should take a more proactive role in fixing things that it has control over. Memory exceptions, illegal instructions, and other system failures should be handled by the operating system with no user intervention. The operating system should handle those faults and write them to a fault log for later viewing. Users should not be presented with meaningless dialogs or message boxes that they can’t do anything about. The level at which these errors are reported should be configurable, and allow the user to decide when they want to be notified.

Break/Fix behaviors should be pluggable strategies. This would allow a system to be optimized for use in a battery-operated environment, or perhaps optimized for use in a highly secure environment. Strategies could be customized for a particular environment or enterprise without requiring changes to the engine that implements those strategies.

Some strategies should be autonomic in that the strategy should evolve over time based on the use of the system. The user’s habits and workload would be monitored and used as input to the strategy engine. The strategy engine could then perform a dynamic strategy update based on the way the system is used. Power consumption is an area where usage habits could result in a power savings by turning off components that are not likely to be used.

Future proactive management should include a class of non-critical events that are sent to applications that subscribe to them. Applications should have the ability to register for these events by class and use them to dynamically modify the behavior of the application. These modifications might include the ability to dynamically change the application’s user interface as resources come and go, or to modify the look and feel of the application based on the type of user. The application may also need to set certain system parameters on a per-session basis, and have those parameters in effect only for that session. Instead of parsing the event log to determine what requires attention, the system should generate events asynchronously and allow the application to take action or just ignore the event. For example, the application might request to be notified when the disk drive reaches 60 percent of its capacity, or if network bandwidth falls below 1 Mb/s. Each application should have the ability to set its own threshold at which the event will be triggered, as each application has its own unique requirements.
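
A toy version of that registration model, using the disk threshold from the example above: psutil supplies the measurement, while the event classes and callback shape are assumptions of mine rather than any existing API.

```python
import psutil  # assumes the psutil package is installed


class ThresholdMonitor:
    """Let applications register for non-critical events by class, each
    with its own threshold, and deliver the events via callbacks."""

    def __init__(self):
        self.subscribers = []            # (event_class, threshold, callback)

    def subscribe(self, event_class, threshold, callback):
        self.subscribers.append((event_class, threshold, callback))

    def poll(self):
        disk_pct = psutil.disk_usage("/").percent   # current drive on Windows
        for event_class, threshold, callback in self.subscribers:
            if event_class == "disk_usage" and disk_pct >= threshold:
                callback({"class": event_class, "value": disk_pct})


monitor = ThresholdMonitor()
monitor.subscribe(
    "disk_usage", 60.0,
    lambda ev: print(f"disk at {ev['value']:.0f}%; start background cleanup"),
)
monitor.poll()
```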

The ability of an application to modify its behavior based on a series of events or pluggable strategies does not currently exist in today’s operating systems. Adding these types of features would make the system autonomic, and provide a platform for autonomic applications to run. However, it is important that these features be implemented in frameworks and run time components so as not to complicate the design of autonomic software and applications.

For example, an application might vary the rate at which it sends serial data based on some external factors. The programmer should be able to write an application that sends the serial data without regard to the rate by simply calling the method or function that sends the serial data. The code that implements the API should be autonomic, and could change the rate of communications based on a set of events or a particular strategy. Encapsulating the autonomic functions in the data transport would allow the application to be written without requiring the programmer to learn new technologies and interfaces.
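
One hypothetical way to encapsulate that behavior is a small wrapper around whatever function actually transmits the data; the application calls send() and never sees the pacing logic.

```python
import time


class AdaptiveSender:
    """Wrap a send function so the caller never deals with pacing: the
    wrapper backs off after failures and speeds up again on success."""

    def __init__(self, send_fn, min_delay=0.001, max_delay=0.5):
        self.send_fn = send_fn           # e.g. a serial port's write method
        self.delay = min_delay
        self.min_delay = min_delay
        self.max_delay = max_delay

    def send(self, data):
        while True:
            try:
                self.send_fn(data)
                # Success: gently raise the rate again.
                self.delay = max(self.min_delay, self.delay / 2)
                return
            except IOError:
                # Failure: halve the rate and retry after a pause.
                self.delay = min(self.max_delay, self.delay * 2)
            time.sleep(self.delay)
```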

Future device instrumentation should include performance metrics for each device. For example, it should be possible for applications to retrieve the average transfer rate, the ratio of seek time to read time, and the average read latency of a disk drive. The ability to instrument these devices should be configurable, as this type of instrumentation will add overhead to system operation. This instrumentation should not be something that the programmer needs to explicitly invoke; rather, it should be built into the current application run time and easily available to the application. It should not require the developer to learn complex new technologies such as COM, but should provide a common set of frameworks that expose the information in a way that is seamless to the application developer.
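
With today’s tools, an approximation of some of those metrics can already be derived from the I/O counters the operating system keeps. The sketch below samples them twice with psutil; the disk name "PhysicalDrive0" is what psutil typically reports on Windows, but it is an assumption here.

```python
import time

import psutil  # assumes the psutil package is installed


def disk_read_metrics(disk="PhysicalDrive0", interval_secs=5.0):
    """Sample I/O counters twice and derive an average transfer rate and
    read latency for one disk over the interval."""
    before = psutil.disk_io_counters(perdisk=True)[disk]
    time.sleep(interval_secs)
    after = psutil.disk_io_counters(perdisk=True)[disk]

    reads = after.read_count - before.read_count
    read_bytes = after.read_bytes - before.read_bytes
    read_ms = after.read_time - before.read_time   # time spent reading, in ms

    return {
        "transfer_rate_bytes_per_sec": read_bytes / interval_secs,
        "avg_read_latency_ms": (read_ms / reads) if reads else 0.0,
    }
```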

Programmers writing software in Visual Basic, for example, should not have to learn COM, but should have the information exported in a way that makes the information easily available to a VB application. Likewise, developers who write in C or C++ should also not be required to learn new technologies, but should be provided with a seamless set of functions and methods to provide access to the instrumentation subsystem. For example, a programmer writing an application in the C language should not have to learn the C++ language, COM programming, and OLE data types just to get access to the instrumentation subsystem.

Applications should have the ability to query data about their own performance so they can adjust their own parameters or operating mode dynamically. When an application is launched by the operating system’s program loader, the loader should have an option to create an instrumentation object which is linked with the application for the life of the application. Using this object pointer, the application should be able to easily access its own instrumentation information with simple method calls. The programmer should not have to initialize or use complicated subsystems to get at this information. Having the information handy and easily accessible will encourage developers to use the data to make their applications autonomic. For example, the Windows operating system is currently instrumented with Windows Management Instrumentation, or WMI. While rich in features, WMI requires a fairly in-depth knowledge of the Component Object Model, or COM, as well as a good understanding of COM and OLE data types.
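
The loader-created object described above does not exist as such, but a rough stand-in for it can be built today. The sketch below wraps psutil’s view of the current process behind a few simple method calls; the class and method names are illustrative.

```python
import psutil  # assumes the psutil package is installed


class SelfInstrumentation:
    """A small handle an application keeps for its own metrics, standing
    in for the loader-created instrumentation object described above."""

    def __init__(self):
        self._proc = psutil.Process()    # the current process

    def memory_mb(self):
        return self._proc.memory_info().rss / (1024 * 1024)

    def cpu_percent(self):
        return self._proc.cpu_percent(interval=0.1)

    def open_file_count(self):
        return len(self._proc.open_files())


# The application can adapt its own behavior from these numbers, for
# example flushing caches when it crosses a self-imposed memory budget.
metrics = SelfInstrumentation()
print(f"working set: {metrics.memory_mb():.1f} MB")
```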

As operating systems become more complex, the need for on-demand wizards will increase. Users will elect to bypass menus and property settings in favor of quick shortcuts for certain operations. For casual users and frequent travelers, the on-demand wizard offers an easy way to perform complicated tasks with little or no knowledge of the underlying technology. An example would be getting connected to the internet from a new or unfamiliar location.

Large enterprises use various types of hardware and software to secure their internal networks, including firewalls, Virtual Private Network (VPN) connections, and various forms of encryption. Users inside the firewall need to reach the Internet, but because this is a potential security risk, access is typically granted through some type of proxy server. When the user attempts to view a Web page, the HTTP request is sent to the proxy server, which performs the request and sends the data back to the requesting system. The name of the SOCKS server often varies by location, so a user traveling from one location to another might not be able to connect to the Internet from inside the firewall without changing the name of the SOCKS server in the user’s configuration. A simple wizard or button labeled “Connect Me Now!” could automatically connect a user’s system to the Internet without requiring the user to know the name of the local SOCKS server or how to change that parameter on the client system.
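
Behind such a button might sit something as simple as the following sketch, which walks a site-provided list of candidate SOCKS servers and adopts the first one that answers; the host names and the reachability test are hypothetical stubs.

    // Hypothetical "Connect Me Now!" routine.
    #include <cstdio>
    #include <string>
    #include <vector>

    // Stub: a real implementation would attempt a connection to the SOCKS port.
    bool SocksServerReachable(const std::string& host) {
        return host == "socks.site-b.example.com";  // pretend only one answers
    }

    bool ConnectMeNow(std::string* chosenServer) {
        const std::vector<std::string> candidates = {
            "socks.site-a.example.com",  // hypothetical per-site names
            "socks.site-b.example.com",
            "socks.site-c.example.com",
        };
        for (const std::string& host : candidates) {
            if (SocksServerReachable(host)) {
                *chosenServer = host;  // a real wizard would also write this
                                       // into the client's proxy configuration
                return true;
            }
        }
        return false;
    }

    int main() {
        std::string server;
        if (ConnectMeNow(&server)) {
            std::printf("Connected through %s\n", server.c_str());
        } else {
            std::printf("No SOCKS server found at this location.\n");
        }
        return 0;
    }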

When a Windows XP user double-clicks a folder that contains pictures, they are presented with options for how to display the contents of the folder: a traditional file view or thumbnails. In the thumbnail view, the user can browse the thumbnails, then view, edit, or print them. There is also an option to save the picture or pictures to disk. However, most users simply transfer their pictures from a camera and want to save them to an album, not a file. Yet the XP wizard has no option to add the photo or photos to the user’s photo album.

The other thing users like to do is send pictures to friends and relatives. A wizard should allow the user to easily send a picture or pictures to someone without knowing what type of file it is or where it is located. For example, when the user checks the properties of the picture, one of the options should be to send it. The wizard would then pop up a list of recipients from the user’s address book and allow the user to enter a short note and click the “send” button. If the system does not have an active network connection, the system should queue the message to be sent later, when a connection exists.
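
The queue-and-send-later behavior is easy to sketch; the connectivity check and the send call below are stubs standing in for whatever the platform actually provides.

    // Hypothetical store-and-forward outbox: send immediately when a
    // connection exists, otherwise queue silently and flush later.
    #include <cstdio>
    #include <deque>
    #include <string>

    struct OutgoingPicture {
        std::string recipient;
        std::string path;
        std::string note;
    };

    // Stubs standing in for real connectivity and mail APIs.
    bool NetworkAvailable() { return false; }
    void SendNow(const OutgoingPicture& p) { std::printf("sent %s\n", p.path.c_str()); }

    class PictureOutbox {
    public:
        void Send(const OutgoingPicture& p) {
            if (NetworkAvailable()) SendNow(p);
            else                    pending_.push_back(p);  // no error shown to the user
        }
        // Called whenever a connection appears.
        void Flush() {
            while (!pending_.empty() && NetworkAvailable()) {
                SendNow(pending_.front());
                pending_.pop_front();
            }
        }
    private:
        std::deque<OutgoingPicture> pending_;
    };

    int main() {
        PictureOutbox outbox;
        outbox.Send({"grandma@example.com", "C:\\Pictures\\img001.jpg", "From our trip"});
        outbox.Flush();  // nothing goes out until NetworkAvailable() reports true
        return 0;
    }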

Within a large enterprise, a great deal of specialized knowledge exists but is often not shared, because no one knows specifically where to look for it, or perhaps isn’t even aware it exists. In a large company, it is not uncommon for a project team to be unaware of a similar project underway elsewhere in the same company. Once the teams do find each other, however, they can discuss common problems and solutions, and perhaps share technology and knowledge with each other.

Like the project teams, systems within an enterprise also hold specialized information that can be shared with peers to help solve problems or assist in locating resources. For example, suppose a client could access the corporate intranet but not the Internet because it did not have the name or IP address of the local SOCKS server. It could ask one or more of its peers whether they have Internet access and, if so, what they are using for their connectivity parameters. The system could then configure itself and get connected with no operator intervention required.
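
A sketch of that peer exchange might look like the following; the ConnectivityParams shape, the peer names, and AskPeer() are all hypothetical stand-ins for whatever peer-to-peer channel the platform would provide.

    // Hypothetical peer query: ask nearby machines what proxy settings they
    // use, and adopt the first working answer.
    #include <cstdio>
    #include <optional>
    #include <string>
    #include <vector>

    struct ConnectivityParams {
        std::string socks_server;
        int         socks_port;
    };

    // Stub: a peer either shares its settings or declines/doesn't answer.
    std::optional<ConnectivityParams> AskPeer(const std::string& peer) {
        if (peer == "peer2.example.local") {
            return ConnectivityParams{"socks.site-b.example.com", 1080};
        }
        return std::nullopt;
    }

    int main() {
        const std::vector<std::string> peers = {
            "peer1.example.local", "peer2.example.local", "peer3.example.local"};
        for (const std::string& peer : peers) {
            if (auto params = AskPeer(peer)) {
                std::printf("Configuring SOCKS %s:%d (learned from %s)\n",
                            params->socks_server.c_str(), params->socks_port,
                            peer.c_str());
                break;
            }
        }
        return 0;
    }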

Another example might be the sharing of bookmarks among peers. Bookmarks usually contain a great deal of site-specific information, such as URLs for computer repair, telephone support, shipping, receiving, and purchasing. These bookmarks can be categorized, aggregated and shared among willing peers.

Writing autonomic software requires the programmer to take a close look at the functional components of their code from a non-technical perspective. It’s easy for us to detect errors and report them, leaving the resolution up to the user. We do this because it’s what we understand, and because we also understand how to resolve the problem. It’s also easier to write code this way because it requires the least amount of thought. Writing autonomic software requires that we not only understand the problem, but that we understand enough about the solution that we can correct it ourselves without requiring the user to fix it.

The operating system should provide a set of functions to make developing autonomic software easier. There should be standard access to system instrumentation information, and a set of tools and frameworks to provide autonomic behavior without forcing programmers to learn new technologies, languages, and architectures. Even without these tools, we can begin writing autonomic applications right now. We can even go back and modify current applications to be more autonomic, just by looking at them from a functional point of view.

Look at every place you request operator input and ask whether that input could be filled in automatically. Try to reduce the amount of operator interaction. Pay careful attention to menus, dialogs, and fields to make sure they really make sense. Have the program evaluated by several users who have never seen it in operation: have them install it, configure it, and use it without a manual. Examine every error message to see whether it is really necessary, or whether it could be replaced with software that fixes the problem for the user. If an error message is required, make sure it makes sense and that it’s written in terms the user can understand. Be kind, as users are often intimidated by errors that make them feel they’ve somehow done something wrong.

Be proactive. For example, if your software saves large amounts of data to disk, check the disk space when the program is started, before the user tries to save several hours of work and can’t because there isn’t enough space left. If your software uses the serial port, check that the port is available before the user tries to send or receive data and gets a timeout error. If your software will be issuing a remote procedure call to another server, make sure the connection exists before making the call, rather than making the call and hanging the system.
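
The disk-space case can be shown concretely with the Win32 GetDiskFreeSpaceEx call; the 500 MB threshold below is just an example value an application would choose for itself.

    // Check free space at startup, not after the user has hours of unsaved work.
    #include <windows.h>
    #include <cstdio>

    bool HaveRoomToSave(const char* directory, unsigned long long requiredBytes) {
        ULARGE_INTEGER freeBytesAvailable;
        if (!GetDiskFreeSpaceExA(directory, &freeBytesAvailable, nullptr, nullptr)) {
            return false;  // treat "can't tell" as a reason to warn early
        }
        return freeBytesAvailable.QuadPart >= requiredBytes;
    }

    int main() {
        const unsigned long long kRequired = 500ull * 1024 * 1024;  // example: 500 MB
        if (!HaveRoomToSave("C:\\", kRequired)) {
            std::printf("Low disk space: free some room before you begin.\n");
        }
        return 0;
    }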

Writing autonomic software requires extra effort and thought, but users will find the result far more pleasant to use. You don’t have to wait for the operating system to provide all of these features; with a little extra work, you can begin incorporating autonomic features into your code today.

–Steve

A Neophyte’s View of Cosmic Inflation

In 2001, physicists Justin Khoury, Burt Ovrut, Paul Steinhardt, and Neil Turok published “The Ekpyrotic Universe: Colliding Branes and the Origin of the Hot Big Bang”. In that paper, the authors attempt to explain the origin of the universe not as the result of the proverbial “big bang”, but rather as the collision of two membranes, or “branes”: three-dimensional worlds that exist in a hidden dimension. The term ekpyrotic comes from the ancient Greeks, who used the word to describe the creation of the world in fire. Steinhardt et al. used the word to describe their theory of how they believe the universe was created.

The ekpyrotic universe is an alternative to the big bang theory of the creation and expansion of the universe. While the widely accepted big bang theory assumes a singularity, a starting point of extreme density and temperature where the universe began, the ekpyrotic universe is believed to have been created by quantum effects arising from the collision of two three-dimensional worlds (Steinhardt). The underlying concept of the ekpyrotic theory is rooted in quantum mechanics, the interactions between subatomic particles that caused photons to be released, whereas the big bang is based largely on classical physics that does not apply to the ekpyrotic theory.

Not everyone has bought into the ekpyrotic theory, however. According to Brian Greene, the colliding branes would have to be parallel with each other to an accuracy of better than 10^-60 on a scale 10^30 times greater than the distance between the branes (Kallosh and Linde).

The so-called big bang was not a bang at all; in fact, it was probably no more than a whimper. It is believed to have taken place in a single space, a region of infinite density and extreme heat. In this state, photons interacted in reactions from which the universe was created, over an interval so small it is referred to as the Planck time, 10^-43 seconds. Pairs of photons collided to form particle pairs such as electrons and positrons, a process called pair production. According to this theory, the universe began to expand immediately and has continued its expansion over time; the current observable distance to the edge of the universe is approximately 46.5 billion light years (Harrison, 2000). As the universe continued to expand, its density decreased, along with its temperature. In accordance with Wien’s law, the decrease in temperature should lead to longer wavelengths, or redshifts, which is exactly what Edwin Hubble observed in the late 1920s, and his observations helped bolster the big bang theory. The theory of the creation and expansion of the universe from a cosmic singularity has been widely accepted and remains the prevailing theory among most scientists, and scientific studies and measurements seem to corroborate it.

Hubble analyzed the redshift observed from remote galaxies and determined that there was a relationship between the amount of redshift and the distance to those galaxies: the farther away a galaxy is, the faster it recedes. That rate, now measured at approximately 73 kilometers per second per megaparsec (Mpc), is referred to as the Hubble constant, H0. The reciprocal of this constant can be used to calculate the approximate age of the universe. The result, about 13 billion years, is consistent with the age of the Earth and solar system as calculated from the isotopic composition of lead, a product of the decay of uranium.
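
Roughly, the conversion works out as follows, taking one megaparsec to be about 3.09 × 10^19 km and one year to be about 3.16 × 10^7 seconds:

    t_H = \frac{1}{H_0}
        = \frac{3.09 \times 10^{19}\ \text{km/Mpc}}{73\ \text{km/s/Mpc}}
        \approx 4.2 \times 10^{17}\ \text{s}
        \approx 13\ \text{billion years}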

In the mid-1960s, scientists discovered that the universe is filled with background radiation, referred to as the cosmic microwave background radiation, or CMBR. Its spectrum is that of photons which have been propagating for billions of years. The temperature of the CMBR also supports the big bang theory. The universe contains more helium than could have been generated by stars, so scientists have concluded that the excess helium must have been produced in enormous thermonuclear reactions, meaning the early universe must have been very hot. The temperature of the CMBR today matches the expected drop in temperature calculated using Wien’s law, which corroborates not only the big bang theory but also the approximate age of the universe. Certain temperature signatures in the CMBR are also consistent with the kind of thermal variations that would be generated by quantum fluctuations of matter confined to a small space. These fluctuations, which take place at the subatomic level, are more aptly described using quantum mechanics.

One of the problems with the big bang theory is that it doesn’t quite fit the universe we observe today. It does not explain why the universe is essentially flat, nor does it provide mechanisms for the creation of stars and galaxies. The big bang theory also did not initially explain why the background radiation is isotropic, which some take to indicate that the universe did not begin with a cosmic singularity. In an attempt to resolve some of these discrepancies, scientists have modified the big bang theory to include a period of very rapid expansion in which the universe grew by a factor of a million trillion trillion in less than a millionth of a trillionth of a trillionth of a second (Greene). This modification provided an explanation for the uniformity of the CMBR.

While it is the best theory scientists have, some missing pieces cannot be explained away using the big bang model. Supporters of the big bang are constantly tweaking the theory in an attempt to reconcile these inconsistencies with the model.

Burt Ovrut, Paul Steinhardt, and Neil Turok originally presented the ekpyrotic theory at a meeting of the Space Telescope Science Institute in 2001. The authors posited that the universe began not in a state of infinite temperature and density, as described by the big bang theory, but in a cold, vacuous state, from which the hot universe we know was born. Expansion then continued much as in the standard picture. The major difference between the ekpyrotic universe and the universe generated by the big bang lies in how the universe actually began. According to the ekpyrotic theory, the universe began with a collision between two adjacent branes, which released energy in the form of quarks, electrons, and photons. The collision happens everywhere at the same time, so there is no single point of cosmic singularity; the result is a homogeneous universe with uniform density and temperature. During the collision, ripples along the flat geometric surfaces generate fluctuations in the microwave background, which are believed to seed the formation of galaxies (Steinhardt).

The fundamental concepts of the ekpyrotic theory are rooted in M-theory, which describes the movement and vibration of one-dimensional strings in a multidimensional space. The ekpyrotic theory rests on unproven ideas in string theory, including an 11-dimensional space, while the big bang inflationary model is based on the well-understood and accepted framework of quantum field theory. Despite wide interest and support, string theory has not been proven, and the prominent physicist Edward Witten has suggested that string theory may require a new mathematical language of its own to describe it.

In the ekpyrotic universe, we would expect to find that the CMBR is isotropic and uniform, the same in every direction and at every location. We would also expect to find no gravitational wave effects in the CMBR, nor would we expect to find magnetic monopoles, as the lower temperature of the ekpyrotic universe would likely prevent them from being created. The existence of very massive magnetic monopoles is a necessary consequence of most unified theories of the strong, electromagnetic, and weak interactions (Longo). These massive monopoles, which consist of magnetically charged particles, would be present in a universe created by the big bang, but would be absent in a universe created at a lower temperature, where such massive particles would not be produced.

While the big bang theory has undergone many years of scrutiny, the concept of the ekpyrotic universe is still relatively new. Although quantum field theory is well understood, the quantum effects thought to have generated the ekpyrotic universe have yet to be demonstrated or recreated. Superstring theory is still just a theory, although it is beginning to gain traction; then again, not so long ago we thought the atom was the basic building block of the universe. Both the big bang and ekpyrotic theories are likely to be debated for many years to come as scientists work to unravel the basic building blocks of the universe. While we don’t know which, if either, of these theories is correct, they have given us not only a better understanding of our world, but also the motivation to keep pushing the envelope in an attempt to understand the origin of the universe.

Greene, Brian. The Fabric of the Cosmos. New York: Random House, 2004.

Harrison, E.R. Cosmology. Cambridge: Cambridge University Press, 2000.

Kallosh, Renata and Andrei Linde. “Pyrotechnic Universe.” High Energy Physics (2001): 35.

Longo, Michael. “Massive magnetic monopoles: Indirect and direct limits on their number density and flux.” Physical Review D (1982).

Steinhardt, Paul. A Brief Introduction to the Ekpyrotic Universe. n.d. Accessed 27 November 2010. www.princeton.edu/~steinh/npr/.