Wireless Power Transmission

Wireless power transmission has been a dream since the days when Nikola Tesla imagined a world studded with enormous Tesla coils. But aside from advances in recharging electric toothbrushes, wireless power has so far failed to make significant inroads into consumer-level gear.

What is it?

This summer, Intel researchers demonstrated a method, based on MIT research, for throwing electricity a distance of a few feet, without wires and without any danger to bystanders (well, none that they know about yet). Intel calls the technology a “wireless resonant energy link,” and it works by sending a specific, 10-MHz signal through a coil of wire; a similar, nearby coil of wire resonates in tune with the frequency, causing electrons to flow through that coil too. Though the design is primitive, it can light up a 60-watt bulb with 70 percent efficiency.
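The “resonant” part of the name is ordinary LC resonance: each coil is tuned so its natural frequency matches the driving signal. As a rough illustration of the tuning arithmetic (the component values below are invented for the example, not Intel’s design), the resonant frequency of a coil-and-capacitor pair is f = 1/(2π√(LC)):

```python
import math

# Illustrative only: these component values are assumptions, not Intel's design.
L = 2.5e-6   # coil inductance in henries (hypothetical)
C = 101e-12  # tuning capacitance in farads (hypothetical)

# Resonant frequency of an LC circuit: f = 1 / (2 * pi * sqrt(L * C))
f = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
print(f"Resonant frequency: {f / 1e6:.1f} MHz")  # ~10 MHz with these values
```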

When is it coming?

Numerous obstacles remain, the first of which is that the Intel project uses alternating current. To charge gadgets, we’d have to see a direct-current version, and the size of the apparatus would have to be considerably smaller. Many regulatory hurdles would likely have to be cleared in commercializing such a system, and it would have to be thoroughly vetted for safety concerns. Assuming those all go reasonably well, such receiving circuitry could be integrated into the back of your laptop screen in roughly the next six to eight years. It would then be a simple matter for your local airport or even Starbucks to embed the companion power transmitters right into the walls so you can get a quick charge without ever opening up your laptop bag.

Memristor: A Groundbreaking New Circuit

This simple memristor circuit could soon transform all electronic devices.

Since the dawn of electronics, we’ve had only three types of circuit components: resistors, inductors, and capacitors. But in 1971, UC Berkeley researcher Leon Chua theorized the possibility of a fourth type of component, one that would be able to measure the flow of electric current: the memristor. Now, just 37 years later, Hewlett-Packard has built one.

What is it?

As its name implies, the memristor can “remember” how much current has passed through it. And by alternating the amount of current that passes through it, a memristor can also become a one-element circuit component with unique properties. Most notably, it can save its electronic state even when the current is turned off, making it a great candidate to replace today’s flash memory. Memristors will theoretically be cheaper and far faster than flash memory, and allow far greater memory densities. They could also replace RAM chips as we know them, so that, after you turn off your computer, it will remember exactly what it was doing when you turn it back on, and return to work instantly. This lowering of cost and consolidating of components may lead to affordable, solid-state computers that fit in your pocket and run many times faster than today’s PCs. Someday the memristor could spawn a whole new type of computer, thanks to its ability to remember a range of electrical states rather than the simplistic “on” and “off” states that today’s digital processors recognize. By working with a dynamic range of data states in an analog mode, memristor-based computers could be capable of far more complex tasks than just shuttling ones and zeroes around.
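How that memory works can be sketched with the linear ion drift model that HP’s researchers described: the device’s resistance depends on how much charge has flowed through it, and that state persists when the current stops. A minimal sketch follows; every parameter value is an illustrative assumption, not a measured device:

```python
# Minimal sketch of a memristor's defining behavior: resistance depends on the
# charge that has flowed through the device, and persists when current stops.
# Loosely based on the linear ion drift model; all values are illustrative.

R_ON, R_OFF = 100.0, 16_000.0  # ohms (hypothetical resistance limits)
D = 10e-9                      # device thickness in meters (hypothetical)
MU = 1e-14                     # ion mobility in m^2/(V*s) (hypothetical)

w = 0.1 * D  # state variable: width of the doped (low-resistance) region

def memristance(w):
    """Resistance as a weighted mix of the doped and undoped regions."""
    x = w / D
    return R_ON * x + R_OFF * (1.0 - x)

def apply_current(i, dt):
    """Advance the state: dw/dt = (MU * R_ON / D) * i(t)."""
    global w
    w += (MU * R_ON / D) * i * dt
    w = min(max(w, 0.0), D)  # the doped region cannot leave the device

print(f"before: {memristance(w):.0f} ohms")
for _ in range(1000):
    apply_current(1e-3, 1e-4)  # pass current through the device for a while
print(f"after:  {memristance(w):.0f} ohms (state persists with power off)")
```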

When is it coming?

Researchers say that no real barrier prevents implementing the memristor in circuitry immediately. But it’s up to the business side to push products through to commercial reality. Memristors made to replace flash memory (at a lower cost and lower power consumption) will likely appear first; HP’s goal is to offer them by 2012. Beyond that, memristors will likely replace both DRAM and hard disks in the 2014-to-2016 time frame. As for memristor-based analog computers, that step may take 20-plus years.

Software performance testing

In software engineering, performance testing is testing performed to determine how fast some aspect of a system performs under a particular workload. It can also serve to validate and verify other quality attributes of the system, such as scalability, reliability, and resource usage. Performance testing is a subset of performance engineering, an emerging computer science practice that strives to build performance into the design and architecture of a system before coding begins.

Performance testing can serve different purposes. It can demonstrate that the system meets performance criteria. It can compare two systems to find which performs better. Or it can measure which parts of the system or workload cause it to perform badly. In the diagnostic case, software engineers use tools such as profilers to measure which parts of a device or application contribute most to the poor performance, or to establish throughput levels (and thresholds) that maintain an acceptable response time. It is critical to the cost performance of a new system that performance test efforts begin at the inception of the development project and extend through to deployment. The later a performance defect is detected, the higher the cost of remediation. This is true of functional testing, but even more so of performance testing, due to the end-to-end nature of its scope.
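For the diagnostic case, a profiler is usually the first tool reached for. A minimal sketch using Python’s built-in cProfile; the two functions are hypothetical stand-ins for real application code:

```python
import cProfile

def lookup_prices(n=200_000):
    # Hypothetical hot spot standing in for, say, an unindexed database query.
    return sum(i * i % 97 for i in range(n))

def render_page():
    # Hypothetical request handler that calls the slow routine twice.
    return lookup_prices() + lookup_prices()

# Sort by cumulative time to see which call paths dominate the response time.
cProfile.run("render_page()", sort="cumulative")
```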

In performance testing, it is often crucial (and often difficult to arrange) for the test conditions to be similar to the expected actual use. This is, however, rarely possible in practice: production workloads are random by nature, and while test workloads do their best to mimic what may happen in the production environment, it is impossible to replicate this workload variability exactly, except in the simplest systems.
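One common way to approximate that randomness is to draw inter-arrival times from an exponential distribution, i.e., to model requests as a Poisson arrival process, rather than firing them at a fixed rate. A minimal sketch (the arrival rate is an assumed parameter):

```python
import random

RATE = 50.0  # assumed average arrival rate: 50 requests per second

def arrival_times(n):
    """Generate n request timestamps with exponentially distributed gaps."""
    t = 0.0
    for _ in range(n):
        t += random.expovariate(RATE)  # mean gap = 1 / RATE seconds
        yield t

for ts in arrival_times(5):
    print(f"fire request at t = {ts:.3f} s")
```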

Loosely coupled architectural implementations (e.g., SOA) have created additional complexities for performance testing. Enterprise services or assets that share common infrastructure or platforms require coordinated performance testing, with all consumers creating production-like transaction volumes and load on the shared infrastructure, to truly replicate production-like states. Because of the complexity and the financial and time requirements of this activity, some organizations now employ tools that can monitor and create production-like conditions (also referred to as “noise”) in their performance test environments (PTEs) to understand capacity and resource requirements and to verify and validate quality attributes.

Technology

Performance testing technology employs one or more PCs or Unix servers to act as injectors, each emulating the presence of a number of users and each running an automated sequence of interactions (recorded as a script, or as a series of scripts to emulate different types of user interaction) with the host whose performance is being tested. Usually, a separate PC acts as a test conductor, coordinating and gathering metrics from each of the injectors and collating performance data for reporting purposes. The usual sequence is to ramp up the load, starting with a small number of virtual users and increasing the number over a period to some maximum. The test result shows how performance varies with load, given as number of users versus response time. Various tools are available to perform such tests; tools in this category usually execute a suite of tests that emulate real users against the system. Sometimes the results reveal oddities, e.g., that while the average response time might be acceptable, a few key transactions have outliers that take considerably longer to complete, something that might be caused by inefficient database queries or similar problems.
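A minimal ramp-up injector can be sketched with nothing beyond the standard library. Everything here is an assumption for illustration (the target URL, think time, step sizes); real load-testing tools add script recording, a conductor that coordinates multiple injector machines, and far richer reporting:

```python
import threading
import time
import urllib.request

TARGET = "http://localhost:8080/"  # hypothetical system under test
results = []                       # (virtual_users, response_seconds or None)
lock = threading.Lock()

def virtual_user(load_level, requests_per_user=10):
    for _ in range(requests_per_user):
        start = time.perf_counter()
        try:
            urllib.request.urlopen(TARGET, timeout=10).read()
            elapsed = time.perf_counter() - start
            with lock:
                results.append((load_level, elapsed))
        except OSError:
            with lock:
                results.append((load_level, None))  # record the failure
        time.sleep(0.5)  # think time between interactions

# Ramp up: 5, 10, 15, ... virtual users, measuring response time at each step.
for users in range(5, 30, 5):
    threads = [threading.Thread(target=virtual_user, args=(users,))
               for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    timings = [r for lvl, r in results if lvl == users and r is not None]
    if timings:
        print(f"{users:3d} users: mean response "
              f"{sum(timings) / len(timings):.3f} s")
```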

Performance testing can be combined with stress testing, in order to see what happens when an acceptable load is exceeded: does the system crash? How long does it take to recover if a large load is reduced? Does it fail in a way that causes collateral damage?

Analytical performance modeling is a method of modeling the behaviour of an application in a spreadsheet. The model is fed with measurements of transaction resource demands (CPU, disk I/O, LAN, WAN), weighted by the transaction mix (business transactions per hour). The weighted transaction resource demands are added up to obtain the hourly resource demands and divided by the hourly resource capacity to obtain the resource loads. Using the response-time formula R = S / (1 − U), where R is response time, S is service time, and U is the resource load (utilization), response times can be calculated and calibrated against the results of the performance tests. Analytical performance modeling allows evaluation of design options and system sizing based on actual or anticipated business usage. It is therefore much faster and cheaper than performance testing, though it requires a thorough understanding of the hardware platforms.
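The whole model fits in a few lines of code. In this sketch the transaction mix, per-transaction CPU demands, and hourly capacity are invented inputs of the kind that would really come from measurements:

```python
# Analytical performance model, spreadsheet-style. All inputs are
# illustrative assumptions; in practice they come from measurements.

# Business transactions per hour, and CPU seconds demanded per transaction.
mix     = {"login": 9_000, "search": 18_000, "checkout": 3_000}
cpu_sec = {"login": 0.020, "search": 0.050, "checkout": 0.120}

capacity = 3_600.0  # CPU seconds available per hour on one processor

# Weighted demands added up to an hourly total, then divided by capacity.
demand = sum(mix[t] * cpu_sec[t] for t in mix)
U = demand / capacity
print(f"CPU utilization U = {U:.2f}")

# Response time per transaction type from R = S / (1 - U).
for t in mix:
    S = cpu_sec[t]
    R = S / (1.0 - U)
    print(f"{t:9s} service {S * 1000:5.1f} ms -> response {R * 1000:6.1f} ms")
```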

Performance specifications

It is critical to detail performance specifications (requirements) and document them in any performance test plan. Ideally, this is done during the requirements development phase of any system development project, prior to any design effort. See Performance Engineering for more details.

However, performance testing is frequently not performed against a specification; i.e., no one will have expressed what the maximum acceptable response time for a given population of users should be. Performance testing is frequently used as part of the process of performance profile tuning. The idea is to identify the “weakest link”: there is inevitably a part of the system which, if it is made to respond faster, will result in the overall system running faster. It is sometimes a difficult task to identify which part of the system represents this critical path, and some test tools include (or can have add-ons that provide) instrumentation that runs on the server (agents) and reports transaction times, database access times, network overhead, and other server metrics, which can be analyzed together with the raw performance statistics. Without such instrumentation one might have to have someone crouched over Windows Task Manager at the server to see how much CPU load the performance tests are generating (assuming a Windows system under test).

There is an apocryphal story of a company that spent a large amount of money optimizing its software without having performed a proper analysis of the problem. The team ended up rewriting the system’s ‘idle loop’, where they had found the system spent most of its time, but even having the most efficient idle loop in the world obviously didn’t improve overall performance one iota!

Performance testing can be performed across the web, and even in different parts of the country, since it is known that the response times of the Internet itself vary regionally. It can also be done in-house, although routers would then need to be configured to introduce the lag that would typically occur on public networks. Loads should be introduced to the system from realistic points. For example, if 50 percent of a system’s user base will be accessing the system via a 56K modem connection and the other half over a T1, then the load injectors (computers that simulate real users) should either inject load over the same mix of connections (ideal) or simulate the network latency of such connections, following the same user profile.
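The difference between those two connections is easy to put into numbers. A back-of-the-envelope sketch (the payload size is an assumption; 56 kbit/s and 1.544 Mbit/s are the nominal line rates of a 56K modem and a T1):

```python
# Time to transfer a page payload over different access links.
# Ignores latency, protocol overhead, and compression; illustrative only.

payload_bits = 120_000 * 8  # assume a 120 KB page

for name, bits_per_sec in [("56K modem", 56_000), ("T1", 1_544_000)]:
    seconds = payload_bits / bits_per_sec
    print(f"{name:9s}: {seconds:5.2f} s to transfer the payload")
```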

It is always helpful to have a statement of the likely peak number of users that might be expected to use the system at peak times. If there can also be a statement of what constitutes the maximum allowable 95th percentile response time, then an injector configuration can be used to test whether the proposed system meets that specification.
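Checking such a 95th-percentile specification against measured timings takes only a few lines; a minimal sketch using Python’s standard library (the sample timings and the 3.0-second limit are invented):

```python
import statistics

# Hypothetical response times (seconds) collected at the target peak load.
timings = [0.8, 1.1, 0.9, 1.4, 2.2, 1.0, 0.7, 3.1, 1.2, 0.9,
           1.3, 1.8, 0.8, 1.1, 2.7, 1.0, 0.9, 1.5, 1.2, 1.6]

# quantiles(n=20) yields 19 cut points; the last one is the 95th percentile.
p95 = statistics.quantiles(timings, n=20)[-1]

SPEC_SECONDS = 3.0  # assumed maximum allowable 95th percentile response time
verdict = "PASS" if p95 <= SPEC_SECONDS else "FAIL"
print(f"95th percentile = {p95:.2f} s -> {verdict}")
```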

Performance specifications should ask the following questions, at a minimum:

  • In detail, what is the performance test scope? What subsystems, interfaces, components, etc. are in and out of scope for this test?
  • For the user interfaces (UIs) involved, how many concurrent users are expected for each (specify peak vs. nominal)?
  • What does the target system (hardware) look like (specify all server and network appliance configurations)?
  • What is the Application Workload Mix of each application component (for example: 20% login, 40% search, 30% item select, 10% checkout)?
  • What is the System Workload Mix? Multiple workloads may be simulated in a single performance test (for example: 30% Workload A, 20% Workload B, 50% Workload C). A sketch of driving such a mix appears after this list.
  • What are the time requirements for any/all backend batch processes (specify peak vs. nominal)?
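A workload mix like the ones above maps directly onto a weighted random choice in a test script. A minimal sketch (the transaction names and weights are the example values from the question, not a real system):

```python
import random

# Example application workload mix from the question above.
transactions = ["login", "search", "item_select", "checkout"]
weights      = [20, 40, 30, 10]  # percentages

def next_transaction():
    """Pick the next simulated user action according to the mix."""
    return random.choices(transactions, weights=weights, k=1)[0]

# Each virtual user would call next_transaction() in its main loop.
print([next_transaction() for _ in range(10)])
```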

Tasks to undertake

Tasks to perform such a test would include:

  • Decide whether to use internal or external resources to perform the tests, depending on in-house expertise (or lack thereof)
  • Gather or elicit performance requirements (specifications) from users and/or business analysts
  • Develop a high-level plan (or project charter), including requirements, resources, timelines and milestones
  • Develop a detailed performance test plan (including detailed scenarios and test cases, workloads, environment info, etc.)
  • Choose test tool(s)
  • Specify test data needed and charter effort (often overlooked, but often the death of a valid performance test)
  • Develop proof-of-concept scripts for each application/component under test, using chosen test tools and strategies
  • Develop detailed performance test project plan, including all dependencies and associated timelines
  • Install and configure injectors/controller
  • Configure the test environment (ideally hardware identical to the production platform), including router configuration, a quiet network (results should not be upset by other users), deployment of server instrumentation, development of database test sets, etc.
  • Execute tests, probably repeatedly (iteratively), in order to see whether any unaccounted-for factor might affect the results
  • Analyze the results: either pass/fail, or an investigation of the critical path and a recommendation of corrective action

Methodology

patterns & practices Performance Testing Web Applications Methodology

According to the Microsoft Developer Network the patterns & practices Performance Testing Methodology consists of the following activities:

  • Activity 1. Identify the Test Environment. Identify the physical test environment and the production environment as well as the tools and resources available to the test team. The physical environment includes hardware, software, and network configurations. Having a thorough understanding of the entire test environment at the outset enables more efficient test design and planning and helps you identify testing challenges early in the project. In some situations, this process must be revisited periodically throughout the project’s life cycle.
  • Activity 2. Identify Performance Acceptance Criteria. Identify the response time, throughput, and resource utilization goals and constraints. In general, response time is a user concern, throughput is a business concern, and resource utilization is a system concern. Additionally, identify project success criteria that may not be captured by those goals and constraints; for example, using performance tests to evaluate what combination of configuration settings will result in the most desirable performance characteristics.
  • Activity 3. Plan and Design Tests. Identify key scenarios, determine variability among representative users and how to simulate that variability, define test data, and establish metrics to be collected. Consolidate this information into one or more models of system usage to be implemented, executed, and analyzed.
  • Activity 4. Configure the Test Environment. Prepare the test environment, tools, and resources necessary to execute each strategy as features and components become available for test. Ensure that the test environment is instrumented for resource monitoring as necessary.
  • Activity 5. Implement the Test Design. Develop the performance tests in accordance with the test design.
  • Activity 6. Execute the Test. Run and monitor your tests. Validate the tests, test data, and results collection. Execute validated tests for analysis while monitoring the test and the test environment.
  • Activity 7. Analyze Results, Report, and Retest. Consolidate and share results data. Analyze the data both individually and as a cross-functional team. Reprioritize the remaining tests and re-execute them as needed. When all of the metric values are within accepted limits, none of the set thresholds have been violated, and all of the desired information has been collected, you have finished testing that particular scenario on that particular configuration.


Software Engineer vs Programmer

Computer Software Engineers

Significant Points

  • Computer software engineers are projected to be one of the fastest growing occupations over the 2004-14 period.
  • Very good opportunities are expected for college graduates with at least a bachelor’s degree in computer engineering or computer science and with practical work experience.
  • Computer software engineers must continually strive to acquire new skills in conjunction with the rapid changes that are occurring in computer technology.

Nature of the Work

The explosive impact of computers and information technology on our everyday lives has generated a need to design and develop new computer software systems and to incorporate new technologies into a rapidly growing range of applications. The tasks performed by workers known as computer software engineers evolve quickly, reflecting new areas of specialization or changes in technology, as well as the preferences and practices of employers. Computer software engineers apply the principles and techniques of computer science, engineering, and mathematical analysis to the design, development, testing, and evaluation of the software and systems that enable computers to perform their many applications. (A separate statement on engineers appears elsewhere in the Handbook.)

Software engineers working in applications or systems development analyze users’ needs and design, construct, test, and maintain computer applications software or systems. Software engineers can be involved in the design and development of many types of software, including software for operating systems and network distribution, and compilers, which convert programs for execution on a computer. In programming, or coding, software engineers instruct a computer, line by line, how to perform a function. They also solve technical problems that arise. Software engineers must possess strong programming skills, but are more concerned with developing algorithms and analyzing and solving programming problems than with actually writing code. (A separate statement on computer programmers appears elsewhere in the Handbook.)

Computer applications software engineers analyze users’ needs and design, construct, and maintain general computer applications software or specialized utility programs. These workers use different programming languages, depending on the purpose of the program. The programming languages most often used are C, C++, and Java, with Fortran and COBOL used less commonly. Some software engineers develop both packaged systems and systems software or create customized applications.

Computer systems software engineers coordinate the construction and maintenance of a company’s computer systems and plan their future growth. Working with the company, they coordinate each department’s computer needs (ordering, inventory, billing, and payroll recordkeeping, for example) and make suggestions about its technical direction. They also might set up the company’s intranets: networks that link computers within the organization and ease communication among the various departments.

Systems software engineers work for companies that configure, implement, and install complete computer systems. These workers may be members of the marketing or sales staff, serving as the primary technical resource for sales workers and customers. They also may be involved in product sales and in providing their customers with continuing technical support. Since the selling of complex computer systems often requires substantial customization for the purchaser’s organization, software engineers help to explain the requirements necessary for installing and operating the new system in the purchaser’s computing environment. In addition, systems software engineers are responsible for ensuring security across the systems they are configuring.

Computer software engineers often work as part of a team that designs new hardware, software, and systems. A core team may comprise engineering, marketing, manufacturing, and design people, who work together until the product is released.

Working Conditions

Computer software engineers normally work in well-lighted and comfortable offices or laboratories in which computer equipment is located. Most software engineers work at least 40 hours a week; however, due to the project-oriented nature of the work, they also may have to work evenings or weekends to meet deadlines or solve unexpected technical problems. Like other workers who sit for hours at a computer, typing on a keyboard, software engineers are susceptible to eyestrain, back discomfort, and hand and wrist problems such as carpal tunnel syndrome.

As they strive to improve software for users, many computer software engineers interact with customers and coworkers. Computer software engineers who are employed by software vendors and consulting firms, for example, spend much of their time away from their offices, frequently traveling overnight to meet with customers. They call on customers in businesses ranging from manufacturing plants to financial institutions.

As networks expand, software engineers may be able to use modems, laptops, e-mail, and the Internet to provide more technical support and other services from their main office, connecting to a customer’s computer remotely to identify and correct developing problems.

Training, Other Qualifications, and Advancement

Most employers prefer to hire persons who have at least a bachelor’s degree and broad knowledge of, and experience with, a variety of computer systems and technologies. The usual degree concentration for applications software engineers is computer science or software engineering; for systems software engineers, it is computer science or computer information systems. Graduate degrees are preferred for some of the more complex jobs.

Academic programs in software engineering emphasize software and may be offered as a degree option or in conjunction with computer science degrees. Increasing emphasis on computer security suggests that software engineers with advanced degrees that include mathematics and systems design will be sought after by software developers, government agencies, and consulting firms specializing in information assurance and security. Students seeking software engineering jobs enhance their employment opportunities by participating in internship or co-op programs offered through their schools. These experiences provide the students with broad knowledge and experience, making them more attractive candidates to employers. Inexperienced college graduates may be hired by large computer and consulting firms that train new employees in intensive, company-based programs. In many firms, new hires are mentored, and their mentors have input into the performance evaluations of these new employees.

For systems software engineering jobs that require workers who have a college degree, a bachelor’s degree in computer science or computer information systems is typical. For systems engineering jobs that place less emphasis on workers having a computer-related degree, computer training programs leading to certification are offered by systems software vendors. Nonetheless, most training authorities feel that program certification alone is not sufficient for the majority of software engineering jobs.

Persons interested in jobs as computer software engineers must have strong problem-solving and analytical skills. They also must be able to communicate effectively with team members, other staff, and the customers they meet. Because they often deal with a number of tasks simultaneously, they must be able to concentrate and pay close attention to detail.

As is the case with most occupations, advancement opportunities for computer software engineers increase with experience. Entry-level computer software engineers are likely to test and verify ongoing designs. As they become more experienced, they may become involved in designing and developing software. Eventually, they may advance to become a project manager, manager of information systems, or chief information officer. Some computer software engineers with several years of experience or expertise find lucrative opportunities working as systems designers or independent consultants or starting their own computer consulting firms.

As technological advances in the computer field continue, employers demand new skills. Computer software engineers must continually strive to acquire such skills if they wish to remain in this extremely dynamic field. For example, computer software engineers interested in working for a bank should have some expertise in finance as they integrate new technologies into the computer system of the bank. To help them keep up with the changing technology, continuing education and professional development seminars are offered by employers, software vendors, colleges and universities, private training institutions, and professional computing societies.


Computer Programmers

Computer programmers write, test, and maintain the detailed instructions, called programs, that computers must follow to perform their functions. Programmers also conceive, design, and test logical structures for solving problems by computer. Many technical innovations in programming—advanced computing technologies and sophisticated new languages and programming tools—have redefined the role of a programmer and elevated much of the programming work done today. Job titles and descriptions may vary, depending on the organization. In this occupational statement, computer programmers are individuals whose main job function is programming; this group has a wide range of responsibilities and educational backgrounds.

Computer programs tell the computer what to do: which information to identify and access, how to process it, and what equipment to use. Programs vary widely depending on the type of information to be accessed or generated. For example, the instructions involved in updating financial records are very different from those required to duplicate conditions on an aircraft for pilots training in a flight simulator. Although simple programs can be written in a few hours, programs that use complex mathematical formulas whose solutions can only be approximated or that draw data from many existing systems may require more than a year of work. In most cases, several programmers work together as a team under a senior programmer’s supervision.

Many programmers update, repair, modify, and expand existing programs. When making changes to a section of code, called a routine, programmers need to make other users aware of the task that the routine is to perform. They do this by inserting comments in the coded instructions so that others can understand the program. Many programmers use computer-assisted software engineering (CASE) tools to automate much of the coding process. These tools enable a programmer to concentrate on writing the unique parts of the program, because the tools automate various pieces of the program being built. CASE tools generate whole sections of code automatically, rather than line by line. Programmers also use libraries of basic code that can be modified or customized for a specific application. This approach yields more reliable and consistent programs and increases programmers’ productivity by eliminating some routine steps.

Programmers test a program by running it to ensure that the instructions are correct and that the program produces the desired outcome. If errors do occur, the programmer must make the appropriate change and recheck the program until it produces the correct results. This process is called testing and debugging. Programmers may continue to fix these problems throughout the life of a program. Programmers working in a mainframe environment, which involves a large centralized computer, may prepare instructions for a computer operator who will run the program. (A separate statement on computer operators appears elsewhere in the Handbook.) Programmers also may contribute to a manual for persons who will be using the program.

Computer programmers often are grouped into two broad types—applications programmers and systems programmers. Applications programmers write programs to handle a specific job, such as a program to track inventory within an organization. They also may revise existing packaged software or customize generic applications which are frequently purchased from vendors. Systems programmers, in contrast, write programs to maintain and control computer systems software, such as operating systems, networked systems, and database systems. These workers make changes in the instructions that determine how the network, workstations, and central processing unit of the system handle the various jobs they have been given and how they communicate with peripheral equipment such as terminals, printers, and disk drives. Because of their knowledge of the entire computer system, systems programmers often help applications programmers determine the source of problems that may occur with their programs.

Programmers in software development companies may work directly with experts from various fields to create software—either programs designed for specific clients or packaged software for general use—ranging from games and educational software to programs for desktop publishing and financial planning. Programming of packaged software constitutes one of the most rapidly growing segments of the computer services industry.

In some organizations, particularly small ones, workers commonly known as programmer-analysts are responsible for both the systems analysis and the actual programming work. (A more detailed description of the work of programmer-analysts is presented in the statement on computer systems analysts elsewhere in the Handbook.) Advanced programming languages and new object-oriented programming capabilities are increasing the efficiency and productivity of both programmers and users. The transition from a mainframe environment to one that is based primarily on personal computers (PCs) has blurred the once rigid distinction between the programmer and the user. Increasingly, adept end users are taking over many of the tasks previously performed by programmers. For example, the growing use of packaged software, such as spreadsheet and database management software packages, allows users to write simple programs to access data and perform calculations.