Organization: Public

General Drawings


Drawings

Brief Description:

illustrates a simplified system 100 in which a server 104 and a client device 106 are communicatively coupled via a network 102.

Detailed Description:

FIG. 1 illustrates a system 100 in which a server 104 and a client device 106 are connected to a network 102.

In various embodiments, the network 102 may include the Internet, a local area network (“LAN”), a wide area network (“WAN”), and/or other data network. In addition to traditional data-networking protocols, in some embodiments, data may be communicated according to protocols and/or standards including near field communication (“NFC”), Bluetooth, power-line communication (“PLC”), and the like. In some embodiments, the network 102 may also include a voice network that conveys not only voice communications, but also non-voice data such as Short Message Service (“SMS”) messages, as well as data communicated via various cellular data communication protocols, and the like.

In various embodiments, the client device 106 may include desktop PCs, mobile phones, laptops, tablets, wearable computers, or other computing devices that are capable of connecting to the network 102 and communicating with the server 104, such as described herein.

In various embodiments, additional infrastructure (e.g., short message service centers, cell sites, routers, gateways, firewalls, and the like), as well as additional devices may be present. Further, in some embodiments, the functions described as being provided by some or all of the server 104 and the client device 106 may be implemented via various combinations of physical and/or logical devices. However, it is not necessary to show such infrastructure and implementation details in FIG. 1 in order to describe an illustrative embodiment.

Brief Description:

is an example block diagram of a computing device 200 that may incorporate embodiments of the present invention.

Detailed Description:

FIG. 2 is an example block diagram of a computing device 200 that may incorporate embodiments of the present invention. FIG. 2 is merely illustrative of a machine system to carry out aspects of the technical processes described herein, and does not limit the scope of the claims. One of ordinary skill in the art would recognize other variations, modifications, and alternatives. In one embodiment, the computing device 200 typically includes a monitor or graphical user interface 202, a data processing system 220, a communication network interface 212, input device(s) 208, output device(s) 206, and the like.

As depicted in FIG. 2, the data processing system 220 may include one or more processor(s) 204 that communicate with a number of peripheral devices via a bus subsystem 218. These peripheral devices may include input device(s) 208, output device(s) 206, communication network interface 212, and a storage subsystem, such as a volatile memory 210 and a nonvolatile memory 214.

The volatile memory 210 and/or the nonvolatile memory 214 may store computer-executable instructions, thereby forming logic 222 that, when applied to and executed by the processor(s) 204, implements embodiments of the processes disclosed herein.

The input device(s) 208 include devices and mechanisms for inputting information to the data processing system 220. These may include a keyboard, a keypad, a touch screen incorporated into the monitor or graphical user interface 202, audio input devices such as voice recognition systems, microphones, and other types of input devices. In various embodiments, the input device(s) 208 may be embodied as a computer mouse, a trackball, a track pad, a joystick, a wireless remote, a drawing tablet, a voice command system, an eye tracking system, and the like. The input device(s) 208 typically allow a user to select objects, icons, control areas, text, and the like that appear on the monitor or graphical user interface 202 via a command such as a click of a button or the like.

The output device(s) 206 include devices and mechanisms for outputting information from the data processing system 220. These may include the monitor or graphical user interface 202, speakers, printers, infrared LEDs, and so on, as is well understood in the art.

The communication network interface 212 provides an interface to communication networks (e.g., communication network 216) and devices external to the data processing system 220. The communication network interface 212 may serve as an interface for receiving data from and transmitting data to other systems. Embodiments of the communication network interface 212 may include an Ethernet interface, a modem (telephone, satellite, cable, ISDN), (asynchronous) digital subscriber line (DSL), FireWire, USB, a wireless communication interface such as Bluetooth or Wi-Fi, a near field communication wireless interface, a cellular interface, and the like.

The communication network interface 212 may be coupled to the communication network 216 via an antenna, a cable, or the like. In some embodiments, the communication network interface 212 may be physically integrated on a circuit board of the data processing system 220, or in some cases may be implemented in software or firmware, such as “soft modems”, or the like.

The computing device 200 may include logic that enables communications over a network using protocols such as HTTP, TCP/IP, RTP/RTSP, IPX, UDP and the like. 
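
By way of illustration only, the following minimal Python sketch shows communication logic of the kind described above issuing an HTTP request over a TCP/IP connection; the host name and path are placeholders chosen for the example and are not elements of the disclosed system.

```python
import http.client

# Minimal sketch of communication logic issuing an HTTP request carried over
# TCP/IP. The host name and path are placeholders; any reachable HTTP server
# could be substituted.
def fetch_status(host: str = "example.com", path: str = "/") -> int:
    conn = http.client.HTTPConnection(host, 80, timeout=10)
    try:
        conn.request("GET", path)      # HTTP request over a TCP connection
        response = conn.getresponse()
        return response.status         # e.g., 200 on success
    finally:
        conn.close()

if __name__ == "__main__":
    print(fetch_status())
```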

The volatile memory 210 and the nonvolatile memory 214 are examples of tangible media configured to store computer readable data and instructions to implement various embodiments of the processes described herein. Other types of tangible media include removable memory (e.g., pluggable USB memory devices, mobile device SIM cards), optical storage media such as CD-ROMs, DVDs, semiconductor memories such as flash memories, non-transitory read-only memories (ROMs), battery-backed volatile memories, networked storage devices, and the like. The volatile memory 210 and the nonvolatile memory 214 may be configured to store the basic programming and data constructs that provide the functionality of the disclosed processes and other embodiments thereof that fall within the scope of the present invention.

Logic 222 that implements embodiments of the present invention may be stored in the volatile memory 210 and/or the nonvolatile memory 214. Said logic 222 may be read from the volatile memory 210 and/or nonvolatile memory 214 and executed by the processor(s) 204. The volatile memory 210 and the nonvolatile memory 214 may also provide a repository for storing data used by the logic 222.

The volatile memory 210 and the nonvolatile memory 214 may include a number of memories including a main random access memory (RAM) for storage of instructions and data during program execution and a read only memory (ROM) in which read-only non-transitory instructions are stored. The volatile memory 210 and the nonvolatile memory 214 may include a file storage subsystem providing persistent (non-volatile) storage for program and data files. The volatile memory 210 and the nonvolatile memory 214 may include removable storage systems, such as removable flash memory.

The bus subsystem 218 provides a mechanism for enabling the various components and subsystems of the data processing system 220 to communicate with each other as intended. Although the bus subsystem 218 is depicted schematically as a single bus, some embodiments of the bus subsystem 218 may utilize multiple distinct busses.

It will be readily apparent to one of ordinary skill in the art that the computing device 200 may be a device such as a smartphone, a desktop computer, a laptop computer, a rack-mounted computer system, a computer server, or a tablet computer device. As commonly known in the art, the computing device 200 may be implemented as a collection of multiple networked computing devices. Further, the computing device 200 will typically include operating system logic (not illustrated), the types and nature of which are well known in the art.

Terms used herein should be accorded their ordinary meaning in the relevant arts, or the meaning indicated by their use in context, but if an express definition is provided, that meaning controls.

“Circuitry” in this context refers to electrical circuitry having at least one discrete electrical circuit, electrical circuitry having at least one integrated circuit, electrical circuitry having at least one application specific integrated circuit, circuitry forming a general purpose computing device configured by a computer program (e.g., a general purpose computer configured by a computer program which at least partially carries out processes or devices described herein, or a microprocessor configured by a computer program which at least partially carries out processes or devices described herein), circuitry forming a memory device (e.g., forms of random access memory), or circuitry forming a communications device (e.g., a modem, communications switch, or optical-electrical equipment).

“Firmware” in this context refers to software logic embodied as processor-executable instructions stored in read-only memories or media.

“Hardware” in this context refers to logic embodied as analog or digital circuitry.

“Logic” in this context refers to machine memory circuits, non transitory machine readable media, and/or circuitry which by way of its material and/or material-energy configuration comprises control and/or procedural signals, and/or settings and values (such as resistance, impedance, capacitance, inductance, current/voltage ratings, etc.), that may be applied to influence the operation of a device. Magnetic media, electronic circuits, electrical and optical memory (both volatile and nonvolatile), and firmware are examples of logic. Logic specifically excludes pure signals or software per se (however does not exclude machine memories comprising software and thereby forming configurations of matter).

“Software” in this context refers to logic implemented as processor-executable instructions in a machine memory (e.g. read/write volatile or nonvolatile memory or media).

Herein, references to “one embodiment” or “an embodiment” do not necessarily refer to the same embodiment, although they may. Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to.” Words using the singular or plural number also include the plural or singular number respectively, unless expressly limited to a single one or multiple ones. Additionally, the words “herein,” “above,” “below” and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. When the claims use the word “or” in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list and any combination of the items in the list, unless expressly limited to one or the other. Any terms not expressly defined herein have their conventional meaning as commonly understood by those having skill in the relevant art(s).

Various logic functional operations described herein may be implemented in logic that is referred to using a noun or noun phrase reflecting said operation or function. For example, an association operation may be carried out by an “associator” or “correlator”. Likewise, switching may be carried out by a “switch”, selection by a “selector”, and so on.

Brief Description:

illustrates a system 300 in accordance with one embodiment.

Detailed Description:

FIG. 3 illustrates several components of an exemplary system 300 in accordance with one embodiment. In various embodiments, system 300 may include a desktop PC, server, workstation, mobile phone, laptop, tablet, set-top box, appliance, or other computing device that is capable of performing operations such as those described herein. In some embodiments, system 300 may include many more components than those shown in FIG. 3. However, it is not necessary that all of these generally conventional components be shown in order to disclose an illustrative embodiment.  Collectively, the various tangible components or a subset of the tangible components may be referred to herein as “logic” configured or adapted in a particular way, for example as logic configured or adapted with particular software or firmware.

In various embodiments, system 300 may comprise one or more physical and/or logical devices that collectively provide the functionalities described herein. In some embodiments, system 300 may comprise one or more replicated and/or distributed physical or logical devices.

In some embodiments, system 300 may comprise one or more computing resources provisioned from a “cloud computing” provider, for example, Amazon Elastic Compute Cloud (“Amazon EC2”), provided by Amazon.com, Inc. of Seattle, Washington; Sun Cloud Compute Utility, provided by Sun Microsystems, Inc. of Santa Clara, California; Windows Azure, provided by Microsoft Corporation of Redmond, Washington; and the like.
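
The following non-limiting sketch illustrates provisioning such a computing resource using the AWS SDK for Python (boto3); the image identifier, instance type, and the assumption of configured credentials and region are placeholders made only for the example.

```python
import boto3  # AWS SDK for Python; assumes credentials and a region are configured

# Illustrative sketch of provisioning a compute resource from a cloud provider.
# The AMI ID and instance type below are placeholders, not recommendations.
def provision_instance(ami_id: str = "ami-0123456789abcdef0",
                       instance_type: str = "t3.micro") -> str:
    ec2 = boto3.client("ec2")
    response = ec2.run_instances(
        ImageId=ami_id,
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
    )
    return response["Instances"][0]["InstanceId"]
```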

System 300 includes a bus 302 interconnecting several components including a network interface 308, a display 306, a central processing unit 310, and a memory 304.

Memory 304 generally comprises a random access memory (“RAM”) and a permanent non-transitory mass storage device, such as a hard disk drive or solid-state drive. Memory 304 stores an operating system 312.

These and other software components may be loaded into memory 304 of system 300 using a drive mechanism (not shown) associated with a non-transitory computer-readable medium 316, such as a DVD/CD-ROM drive, memory card, network download, or the like.

Memory 304 also includes database 314. In some embodiments, system 300 may communicate with database 314 via network interface 308, a storage area network (“SAN”), a high-speed serial bus, and/or other suitable communication technology.

In some embodiments, database 314 may comprise one or more storage resources provisioned from a “cloud storage” provider, for example, Amazon Simple Storage Service (“Amazon S3”), provided by Amazon.com, Inc. of Seattle, Washington, Google Cloud Storage, provided by Google, Inc. of Mountain View, California, and the like.
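
The following non-limiting sketch illustrates reading and writing records in such a cloud storage resource using the AWS SDK for Python (boto3); the bucket and key names are placeholders and not part of the disclosed system.

```python
import boto3  # AWS SDK for Python; assumes credentials and a region are configured

# Sketch of storing and retrieving a record in a cloud-hosted store such as
# Amazon S3. Bucket and key names are arbitrary placeholders.
def store_record(bucket: str, key: str, payload: bytes) -> None:
    s3 = boto3.client("s3")
    s3.put_object(Bucket=bucket, Key=key, Body=payload)

def load_record(bucket: str, key: str) -> bytes:
    s3 = boto3.client("s3")
    return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
```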

Brief Description:

illustrates an embodiment of a digital apparatus 400 to implement components and process steps of the system described herein.

Detailed Description:

FIG. 4 illustrates an embodiment of a digital apparatus 400 to implement components and process steps of the system described herein.

Input devices 404 comprise transducers that convert physical phenomena into machine internal signals, typically electrical, optical or magnetic signals. Signals may also be wireless in the form of electromagnetic radiation in the radio frequency (RF) range but also potentially in the infrared or optical range. Examples of input devices 404 are keyboards, which respond to touch or physical pressure from an object or proximity of an object to a surface; mice, which respond to motion through space or across a plane; microphones, which convert vibrations in the medium (typically air) into device signals; and scanners, which convert optical patterns on two or three dimensional objects into device signals. The signals from the input devices 404 are provided via various machine signal conductors (e.g., busses or network interfaces) and circuits to the memory 406.

The memory 406 is typically what is known as a first or second level memory device, providing for storage (via configuration of matter or states of matter) of signals received from the input devices 404, instructions and information for controlling operation of the CPU 402, and signals from the storage devices 410.

The memory 406 and/or the storage devices 410 may store computer-executable instructions, thereby forming logic 414 that, when applied to and executed by the CPU 402, implements embodiments of the processes disclosed herein.

Information stored in the memory 406 is typically directly accessible to the CPU 402 of the device. Signals input to the device cause the reconfiguration of the internal material/energy state of the memory 406, creating in essence a new machine configuration, influencing the behavior of the digital apparatus 400 by affecting the behavior of the CPU 402 with control signals (instructions) and data provided in conjunction with the control signals. 

Second or third level storage devices 410 may provide a slower but higher capacity machine memory capability. Examples of storage devices 410 are hard disks, optical disks, large capacity flash memories or other non-volatile memory technologies, and magnetic memories. 

The CPU 402 may cause the configuration of the memory 406 to be altered by signals in the storage devices 410. In other words, the CPU 402 may cause data and instructions to be read from the storage devices 410 into the memory 406, from which they may then influence the operations of the CPU 402 as instructions and data signals, and from which they may also be provided to the output devices 408. The CPU 402 may alter the content of the memory 406 by signaling to a machine interface of the memory 406 to alter its internal configuration, and may then convey signals to the storage devices 410 to alter their internal material configuration. In other words, data and instructions may be backed up from the memory 406, which is often volatile, to the storage devices 410, which are often non-volatile.
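
As a non-limiting illustration of backing up volatile memory contents to non-volatile storage and restoring them later, the following Python sketch writes in-memory state to a file and reads it back; the file name and example data are placeholders.

```python
import json
from pathlib import Path

# Sketch of backing up volatile (in-memory) state to a non-volatile storage
# device and restoring it. The file name is an arbitrary placeholder.
STATE_FILE = Path("device_state.json")

def back_up(state: dict) -> None:
    STATE_FILE.write_text(json.dumps(state))       # memory -> storage device

def restore() -> dict:
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())  # storage device -> memory
    return {}

if __name__ == "__main__":
    back_up({"counter": 42})
    print(restore())
```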

Output devices 408 are transducers which convert signals received from the memory 406 into physical phenomena such as vibrations in the air, patterns of light on a machine display, vibrations (e.g., haptic devices), or patterns of ink or other materials (e.g., printers and 3-D printers).

The network interface 412 receives signals from the memory 406 and converts them into electrical, optical, or wireless signals to other machines, typically via a machine network. The network interface 412 also receives signals from the machine network and converts them into electrical, optical, or wireless signals to the memory 406.
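
The following minimal Python sketch is offered only to illustrate data leaving and entering a machine through a network interface, modeled here with a UDP socket on the local host; the port number is an arbitrary placeholder.

```python
import socket

# Minimal sketch of signals leaving and entering a machine through a network
# interface, modeled with a UDP socket. The port number is a placeholder.
PORT = 50007

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", PORT))            # signals from the network into memory

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", PORT))  # signals from memory onto the network

data, addr = receiver.recvfrom(1024)
print(data, addr)

sender.close()
receiver.close()
```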

Brief Description:

illustrates a mobile device 500 in accordance with one embodiment.

Detailed Description:

Referring to FIG. 5, a mobile device 500 comprises an antenna 502, control logic 504, wireless communication logic 506, a memory 508, a power manager 510, a battery 512, logic 516, and user interface logic 514.

The control logic 504 controls and coordinates the operation of other components and provides signal processing for the mobile device 500. For example, the control logic 504 may extract baseband signals from radio frequency signals received from the wireless communication logic 506, and process baseband signals up into radio frequency signals for communications transmitted to the wireless communication logic 506. Control logic 504 may comprise a central processing unit, digital signal processor, and/or one or more controllers or combinations of these components.

The wireless communication logic 506 may further comprise memory 508 which may be utilized by the control logic 504 to read and write instructions (commands) and data (operands for the instructions). The memory 508 may comprise logic 516 to carry out aspects of the processes disclosed herein, e.g., those aspects executed by a smart phone or other mobile device. 

A human user or operator of the mobile device 500 may utilize the user interface logic 514 to receive information from and input information to the mobile device 500. Images, video and other display information, for example, user interface optical patterns, may be output to the user interface logic 514, which may for example operate as a liquid crystal display or may utilize other optical output technology. The user interface logic 514 may also operate as a user input device, being touch sensitive where contact or close contact by a user’s finger or other device handled by the user may be detected by transducers. An area of contact or proximity to the user interface logic 514 may also be detected by transducers and this information may be supplied to the control logic 504 to affect the internal operation of the mobile device 500 and to influence control and operation of its various components.

Audio signals may be provided to the user interface logic 514, from which signals may be output to one or more speakers to create pressure waves in the external environment representing the audio. The mobile device 500 may convert audio phenomena from the environment into internal electrical or optical signals by operating a microphone and audio circuit (not illustrated).

The mobile device 500 may operate on power received from a battery 512. The battery 512 capability and energy supply may be managed by a power manager 510.

The mobile device 500 may transmit wireless signals of various types and ranges (e.g., cellular, GPS, Wi-Fi, Bluetooth, and near field communication (NFC)). The mobile device 500 may also receive these types of wireless signals. Wireless signals are transmitted and received using wireless communication logic 506 coupled to one or more antennas 502. Other forms of electromagnetic radiation may be used to interact with proximate devices, such as infrared (not illustrated).

Brief Description:

illustrates arrays 600 in accordance with one embodiment.

Detailed Description:

FIG. 6 illustrates arrays 600.

Brief Description:

illustrates electrical, logical, and mathematical symbols 700 in accordance with one embodiment.

Detailed Description:

FIG. 7 illustrates electrical, logical, and mathematical symbols 700.

Brief Description:

illustrates a mobile device 800 in accordance with one embodiment.

Detailed Description:

Signal processing and system control 804 controls and coordinates the operation of other components and provides signal processing for the mobile device 800. For example, the signal processing and system control 804 may extract baseband signals from radio frequency signals received from the wireless communication 806 logic, and process baseband signals up into radio frequency signals for communications transmitted to the wireless communication 806 logic. Signal processing and system control 804 may comprise a central processing unit, digital signal processor, and/or one or more controllers or combinations of these components.

The wireless communication 806 may further comprise memory 808 which may be utilized by the signal processing and system control 804 to read and write instructions (commands) and data (operands for the instructions). 

A human user or operator of the mobile device 800 may utilize the user interface 814 to receive information from and input information to the mobile device 800. Images, video and other display information, for example, user interface optical patterns, may be output to the user interface 814, which may for example operate as a liquid crystal display or may utilize other optical output technology. The user interface 814 may also operate as a user input device, being touch sensitive where contact or close contact by a user’s finger or other device handled by the user may be detected by transducers. An area of contact or proximity to the user interface 814 may also be detected by transducers and this information may be supplied to the signal processing and system control 804 to affect the internal operation of the mobile device 800 and to influence control and operation of its various components.

A camera 816 may interface to image processing 818 logic to record images and video from the environment. The image processing 818 may operate to provide image/video enhancement, compression, and other transformations, and pass the results to the signal processing and system control 804 for further processing and storage to memory 808. Images and video stored in the memory 808 may also be read by the signal processing and system control 804 and output to the user interface 814 for display to a user of the mobile device 800.
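
By way of illustration only, the following Python sketch shows an image-processing step of the kind described, using the Pillow library to downscale and recompress a captured frame before storage; the file names, target size, and quality setting are placeholders.

```python
from PIL import Image  # Pillow imaging library

# Sketch of an image-processing step: a captured frame is downscaled and
# recompressed before being stored. File names and the quality setting are
# placeholders chosen for illustration.
def compress_frame(src_path: str, dst_path: str,
                   max_size=(1280, 720), quality: int = 70) -> None:
    with Image.open(src_path) as frame:
        frame = frame.convert("RGB")                   # ensure a JPEG-compatible mode
        frame.thumbnail(max_size)                      # size reduction
        frame.save(dst_path, "JPEG", quality=quality)  # lossy compression

# Example usage (assumes a capture file exists at the given path):
# compress_frame("capture.png", "capture_small.jpg")
```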

Audio signals may be provided to the user interface 814, from which signals may be output to one or more speakers to create pressure waves in the external environment representing the audio. The mobile device 800 may convert audio phenomena from the environment into internal electrical or optical signals by operating a microphone and audio circuit (not illustrated).

The mobile device 800 may operate on power received from a battery 812. The battery 812 capability and energy supply may be managed by a power manager 810.

The mobile device 800 may transmit wireless signals of various types and ranges (e.g., cellular, Wi-Fi, Bluetooth, and near field communication (NFC)). The mobile device 800 may also receive these types of wireless signals. Wireless signals are transmitted and received using wireless communication 806 logic coupled to one or more antennas 802. Other forms of electromagnetic radiation may be used to interact with proximate devices, such as infrared (not illustrated).

Brief Description:

illustrates a mobile device 900 in accordance with one embodiment.

Detailed Description:

Signal processing and system control 906 controls and coordinates the operation of other components and provides signal processing for the mobile device 900. For example, the signal processing and system control 906 may extract baseband signals from radio frequency signals received from the wireless communication 908 logic, and process baseband signals up into radio frequency signals for communications transmitted to the wireless communication 908 logic. Signal processing and system control 906 may comprise a central processing unit, digital signal processor, and/or one or more controllers or combinations of these components.

The wireless communication 908 may further comprise memory 916 which may be utilized by the signal processing and system control 906 to read and write instructions (commands) and data (operands for the instructions). 

A human user or operator of the mobile device 900 may utilize the user interface 922 to receive information from and input information to the mobile device 900. Images, video and other display information, for example, user interface optical patterns, may be output to the user interface 922, which may for example operate as a liquid crystal display or may utilize other optical output technology. The user interface 922 may also operate as a user input device, being touch sensitive where contact or close contact by a user’s finger or other device handled by the user may be detected by transducers. An area of contact or proximity to the user interface 922 may also be detected by transducers and this information may be supplied to the signal processing and system control 906 to affect the internal operation of the mobile device 900 and to influence control and operation of its various components.

A camera 924 may interface to image processing 926 logic to record images and video from the environment. The image processing 926 may operate to provide image/video enhancement, compression, and other transformations, and pass the results to the signal processing and system control 906 for further processing and storage to memory 916. Images and video stored in the memory 916 may also be read by the signal processing and system control 906 and output to the user interface 922 for display to a user of the mobile device 900.

Audio signals may be provided to the user interface 922, from which signals may be output to one or more speakers to create pressure waves in the external environment representing the audio. The mobile device 900 may convert audio phenomena from the environment into internal electrical or optical signals by operating a microphone and audio circuit (not illustrated).

The mobile device 900 may operate on power received from a battery 920. The battery 920 capability and energy supply may be managed by a power manager 918.

The mobile device 900 may transmit wireless signals of various types and ranges (e.g., cellular, Wi-Fi, Bluetooth, and near field communication (NFC)). The mobile device 900 may also receive these types of wireless signals. Cellular wireless signals are transmitted and received using wireless communication 908 logic coupled to one or more antennas 902. Shorter-range wireless signals may be transmitted and received via the antenna 904 and the wireless communication logic 928. Other forms of electromagnetic radiation may be used to interact with proximate devices, such as infrared (not illustrated).

The mobile device 900 may utilize a haptic driver 932 which controls a vibration generator 914 to cause vibrations in response to events identified by the signal processing and system control 906, such as received text messages, emails, incoming calls, or other events that require the user’s or the device’s attention.

A subscriber identity module (SIM 910) may be present in some mobile devices, especially those operated on the Global System for Mobile Communication (GSM) network. The SIM 910 stores, in machine-readable memory, personal information of a mobile service subscriber, such as the subscriber’s cell phone number, address book, text messages, and other personal data. A user of the mobile device 900 can move the SIM 910 to a different device and maintain access to their personal information. A SIM 910 typically has a unique number which identifies the subscriber to the wireless network service provider.

The mobile device 900 may include an audio driver 930 including an audio encoder/decoder for encoding and decoding digital audio files stored by the memory 916 or the SIM 910, or received in real time via one of the antenna 902 or the antenna 904. The audio driver 930 is controlled by the signal processing and system control 906, and decoded audio is provided to one or more speakers 912 to create pressure waves in the external environment representing the audio.

Brief Description:

illustrates a mobile device 1000 in accordance with one embodiment.

Detailed Description:

Signal processing and system control 1006 controls and coordinates the operation of other components and provides signal processing for the mobile device 1000. For example, the signal processing and system control 1006 may extract baseband signals from radio frequency signals received from the wireless communication logic 1026, and process baseband signals up into radio frequency signals for communications transmitted to the wireless communication logic 1026. Signal processing and system control 1006 may comprise a central processing unit, digital signal processor, and/or one or more controllers or combinations of these components.

The wireless communication logic 1026 may further comprise memory 1016 which may be utilized by the signal processing and system control 1006 to read and write instructions (commands) and data (operands for the instructions). 

A camera 1022 may interface to image processing 1024 logic to record images and video from the environment. The image processing 1024 may operate to provide image/video enhancement, compression, and other transformations, and pass the results to the signal processing and system control 1006 for further processing and storage to memory 1016. Images and video stored in the memory 1016 may also be read by the signal processing and system control 1006 and output for display to a user of the mobile device 1000.

Audio signals may be provided to one or more speakers to create pressure waves in the external environment representing the audio. The mobile device 1000 may convert audio phenomena from the environment into internal electrical or optical signals by operating a microphone and audio circuit (not illustrated).

The mobile device 1000 may operate on power received from a battery 1020. The battery 1020 capability and energy supply may be managed by a power manager 1018.

The mobile device 1000 may transmit wireless signals of various types and ranges (e.g., cellular, Wi-Fi, Bluetooth, and near field communication (NFC)). The mobile device 1000 may also receive these types of wireless signals. Cellular wireless signals are transmitted and received using wireless communication logic 1026 coupled to one or more antennae (not shown). Other forms of electromagnetic radiation may be used to interact with proximate devices, such as infrared (not illustrated).

The mobile device 1000 may utilize a GPU 1030 which controls a motor control 1014 to cause vibrations in response to events identified by the signal processing and system control 1006, such as received text messages, emails, incoming calls, or other events that require the user’s or the device’s attention.

A subscriber identity module (navigation board 1028) may be present in some mobile devices, especially those operated on the Global System for Mobile Communication (GSM) network. The navigation board 1028 stores, in machine-readable memory, personal information of a mobile service subscriber, such as the subscriber’s cell phone number, address book, text messages, and other personal data. A user of the mobile device 1000 can move the navigation board 1028 to a different device and maintain access to their personal information. A navigation board 1028 typically has a unique number which identifies the subscriber to the wireless network service provider.

The mobile device 1000 may include a navigation board 1028 including an audio encoder/decoder for encoding and decoding digital audio files stored by the memory 1016 or the navigation board 1028, or received in real time via an antenna (not shown). The navigation board 1028 is controlled by the signal processing and system control 1006, and decoded audio is provided to one or more altimeters 1012 to create pressure waves in the external environment representing the audio.

Brief Description:

illustrates a network 1100 in accordance with one embodiment.

Detailed Description:

Referring to FIG. 11, a network 1100 comprises a client device 1102, a client device 1104, a client device 1106, a server 1108, a server 1110, a router 1112, a network 1114, a network 1116, and a network 1118.

Brief Description:

illustrates a computing environment 1200 in accordance with one embodiment.

Detailed Description:

Referring to FIG. 12, a computing environment 1200 comprises a CPU 1202, a bus 1204, a ROM 1206, a RAM 1208, an I/O adapter 1210, a memory structure 1212, a communication adapter 1214, a communication 1216, an interface device 1218, a user interface adapter 1220, an interface device 1222, an interface device 1224, an interface device 1226, a display adapter 1228, and a display device 1230.

Brief Description:

illustrates a system 1300 in accordance with one embodiment.

Detailed Description:

Referring to FIG. 13, the system 1300 comprises a host OE 1302, a VOE A 1304, a VOE B 1306, and a VOE C 1308.

Brief Description:

illustrates a system 1400 in accordance with one embodiment.

Detailed Description:

Referring to FIG. 14, a system 1400 comprises a host OE 1402, an OE D 1404, a network 1406, an OE B 1408, and an OE C 1410.

Brief Description:

illustrates a system 1500 in accordance with one embodiment.

Detailed Description:

Referring to FIG. 15, a system 1500 comprises a gateway 1502, an OE A 1504, a network 1506, an OE B 1508, an OE C 1510, and a cloud 1512.

Brief Description:

illustrates a computing environment 1600 in accordance with one embodiment.

Detailed Description:

Referring to FIG. 16, a computing environment 1600 comprises a device 1602. The device 1602 comprises a processor 1604, a persistent secondary storage 1608, an input device adapter 1610, an output device adapter 1612, a network interface adapter 1614, a bus 1616, a virtual processor memory 1618, an input device 1628, and an output device 1630. The virtual processor memory 1618 may comprise a physical processor memory 1606, an operating system 1620, an OCE 1622, applications 1624, and other libraries and subsystems 1626. Part of the persistent secondary storage 1608 and/or the bus 1616 may be comprised by the device 1602 and/or the virtual processor memory 1618.

Brief Description:

illustrates an operating environment 1700 in accordance with one embodiment.

Detailed Description:

Referring to FIG. 17, an operating environment 1700 comprises an application 1702, a web browser 1704, a subsystem 1712, a network stack 1714, an application protocol service 1716, a GUI subsystem 1730, a graphics subsystem 1732, and an input driver 1734. The application 1702 may further comprise an application logic 1706 and a presentation controller 1726. The presentation controller 1726 may comprise a UI element handler 1722. The web browser 1704 may further comprise an application logic 1708, a content manager 1718, and a presentation controller 1728. The application logic 1708 and the presentation controller 1728 may comprise a SAA 1710 and a content handler 1720 in full or in part. The presentation controller 1728 may comprise a UI element handler 1724.

Brief Description:

illustrates a computing environment 1800 in accordance with one embodiment.

Detailed Description:

Referring to FIG. 18, a computing environment 1800 comprises a service application 1802, a network stack 1804, a network application platform 1806, an application protocol service 1808, a controller 1810, a model database 1816, and a template database 1822. The service application 1802 may further comprise a model 1812 and a view 1826. The model 1812 may further comprise a request handler 1814 and a data access manager 1818. The view 1826 may further comprise a template engine 1820, a response handler 1828, and a data-out 1830. The template database 1822 may further comprise a template 1824.
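
The following minimal Python sketch is offered only to illustrate the request flow suggested by FIG. 18, in which a controller invokes a model’s request handler and a view renders the result with a template engine; all class, variable, and method names are hypothetical stand-ins for the numbered elements rather than an actual implementation of them.

```python
# Hypothetical sketch of the FIG. 18 request flow: a controller routes a
# request to a model's request handler, which reads data, and a view renders
# the result with a template. Names mirror the figure but are placeholders.
from string import Template

MODEL_DATABASE = {"greeting": "hello"}                       # stand-in for model database 1816
TEMPLATE_DATABASE = {"page": Template("<p>$greeting</p>")}   # stand-in for template 1824

class Model:
    def handle_request(self, key: str) -> dict:
        return {"greeting": MODEL_DATABASE[key]}             # data access step

class View:
    def render(self, data: dict) -> str:
        return TEMPLATE_DATABASE["page"].substitute(data)    # template rendering step

class Controller:
    def __init__(self) -> None:
        self.model, self.view = Model(), View()
    def dispatch(self, key: str) -> str:
        return self.view.render(self.model.handle_request(key))

if __name__ == "__main__":
    print(Controller().dispatch("greeting"))  # -> <p>hello</p>
```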

Brief Description:

illustrates an item 1900 in accordance with one embodiment.

Detailed Description:

Brief Description:

illustrates an item 2000 in accordance with one embodiment.

Detailed Description:

Brief Description:

illustrates an aspect of the subject matter in accordance with one embodiment.

Detailed Description:

FIG. 21 is a block diagram 2100 illustrating an architecture of software, which can be installed on any one or more of the devices described above. FIG. 21 is merely a non-limiting example of a software architecture, and it will be appreciated that many other architectures can be implemented to facilitate the functionality described herein. In various embodiments, the software architecture 2104 is implemented by hardware such as a machine 2102 that includes processors 2120, memory 2126, and I/O components 2138. In this example architecture, the software architecture 2104 can be conceptualized as a stack of layers where each layer may provide a particular functionality. For example, the software architecture 2104 includes layers such as an operating system 2112, libraries 2110, frameworks 2108, and applications 2106. Operationally, the applications 2106 invoke application programming interface (API) calls 2150 through the software stack and receive messages 2152 in response to the API calls 2150, consistent with some embodiments.

In various implementations, the operating system 2112 manages hardware resources and provides common services. The operating system 2112 includes, for example, a kernel 2114, services 2116, and drivers 2122. The kernel 2114 acts as an abstraction layer between the hardware and the other software layers, consistent with some embodiments. For example, the kernel 2114 provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services 2116 can provide other common services for the other software layers. The drivers 2122 are responsible for controlling or interfacing with the underlying hardware, according to some embodiments. For instance, the drivers 2122 can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth.

In some embodiments, the libraries 2110 provide a low-level common infrastructure utilized by the applications 2106. The libraries 2110 can include system libraries 2118 (e.g., C standard library) that can provide functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like. In addition, the libraries 2110 can include API libraries 2124 such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render in two dimensions (2D) and three dimensions (3D) in graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries 2110 can also include a wide variety of other libraries 2128 to provide many other APIs to the applications 2106.
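
As a non-limiting illustration of an application using such a database library, the following Python sketch exercises SQLite through the standard sqlite3 module; the table and column names are arbitrary examples.

```python
import sqlite3

# Sketch of an application using a relational database library (here, SQLite
# via Python's built-in sqlite3 module). Table and column names are arbitrary.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (name TEXT, phone TEXT)")
conn.execute("INSERT INTO contacts VALUES (?, ?)", ("Ada", "555-0100"))
for row in conn.execute("SELECT name, phone FROM contacts"):
    print(row)
conn.close()
```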

The frameworks 2108 provide a high-level common infrastructure that can be utilized by the applications 2106, according to some embodiments. For example, the frameworks 2108 provide various graphic user interface (GUI) functions, high-level resource management, high-level location services, and so forth. The frameworks 2108 can provide a broad spectrum of other APIs that can be utilized by the applications 2106, some of which may be specific to a particular operating system or platform.

In an example embodiment, the applications 2106 include a home application 2136, a contacts application 2130, a browser application 2132, a book reader application 2134, a location application 2142, a media application 2144, a messaging application 2146, a game application 2148, and a broad assortment of other applications such as a third-party application 2140. According to some embodiments, the applications 2106 are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications 2106, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, the third-party application 2140 (e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application 2140 can invoke the API calls 2150 provided by the operating system 2112 to facilitate functionality described herein.

Brief Description:

illustrates a diagrammatic representation of a machine 2200 in the form of a computer system within which a set of instructions may be executed for causing the machine to perform any one or more of the methodologies discussed herein, according to an example embodiment.

Detailed Description:

FIG. 22 illustrates a diagrammatic representation of a machine 2200 in the form of a computer system within which a set of instructions may be executed for causing the machine to perform any one or more of the methodologies discussed herein, according to an example embodiment. Specifically, FIG. 22 shows a diagrammatic representation of the machine 2200 in the example form of a computer system, within which instructions 2208 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine 2200 to perform any one or more of the methodologies discussed herein may be executed.  For example, the instructions 2208 may cause the machine 2200 to execute the method XYZ of FIG. 2.  Additionally, or alternatively, the instructions 2208 may implement FIGs. X-X, and so forth.  The instructions 2208 transform the general, non-programmed machine 2200 into a particular machine 2200 programmed to carry out the described and illustrated functions in the manner described.  In alternative embodiments, the machine 2200 operates as a standalone device or may be coupled (e.g., networked) to other machines.  In a networked deployment, the machine 2200 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.  The machine 2200 may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a PDA, an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions 2208, sequentially or otherwise, that specify actions to be taken by the machine 2200.  Further, while only a single machine 2200 is illustrated, the term “machine” shall also be taken to include a collection of machines 2200 that individually or jointly execute the instructions 2208 to perform any one or more of the methodologies discussed herein.

The machine 2200 may include processors 2202, memory 2204, and I/O components 2242, which may be configured to communicate with each other such as via a bus 2244.  In an example embodiment, the processors 2202 (e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an ASIC, a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor 2206 and a processor 2210 that may execute the instructions 2208.  The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously.  Although FIG. 22 shows multiple processors 2202, the machine 2200 may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof.

The memory 2204 may include a main memory 2212, a static memory 2214, and a storage unit 2216, each accessible to the processors 2202 such as via the bus 2244.  The main memory 2212, the static memory 2214, and the storage unit 2216 store the instructions 2208 embodying any one or more of the methodologies or functions described herein.  The instructions 2208 may also reside, completely or partially, within the main memory 2212, within the static memory 2214, within the machine-readable medium 2218 within the storage unit 2216, within at least one of the processors 2202 (e.g., within the processor’s cache memory), or any suitable combination thereof, during execution thereof by the machine 2200.

The I/O components 2242 may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on.  The specific I/O components 2242 that are included in a particular machine will depend on the type of machine.  For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device.  It will be appreciated that the I/O components 2242 may include many other components that are not shown in FIG. 22.  The I/O components 2242 are grouped according to functionality merely for simplifying the following discussion and the grouping is in no way limiting.  In various example embodiments, the I/O components 2242 may include output components 2228 and input components 2230.  The output components 2228 may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth.  The input components 2230 may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like.

In further example embodiments, the I/O components 2242 may include biometric components 2232, motion components 2234, environmental components 2236, or position components 2238, among a wide array of other components.  For example, the biometric components 2232 may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like.  The motion components 2234 may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth.  The environmental components 2236 may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment.  The position components 2238 may include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like.

Communication may be implemented using a wide variety of technologies.  The I/O components 2242 may include communication components 2240 operable to couple the machine 2200 to a network 2220 or devices 2222 via a coupling 2224 and a coupling 2226, respectively.  For example, the communication components 2240 may include a network interface component or another suitable device to interface with the network 2220.  In further examples, the communication components 2240 may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities.  The devices 2222 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB).

Moreover, the communication components 2240 may detect identifiers or include components operable to detect identifiers.  For example, the communication components 2240 may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals).  In addition, a variety of information may be derived via the communication components 2240, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth.

EXECUTABLE INSTRUCTIONS AND MACHINE STORAGE MEDIUM

The various memories (i.e., memory 2204, main memory 2212, static memory 2214, and/or memory of the processors 2202) and/or storage unit 2216 may store one or more sets of instructions and data structures (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein.  These instructions (e.g., the instructions 2208), when executed by processors 2202, cause various operations to implement the disclosed embodiments.

As used herein, the terms “machine-storage medium,” “device-storage medium,” “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure.  The terms refer to a single or multiple storage devices and/or media (e.g., a centralized or distributed database, and/or associated caches and servers) that store executable instructions and/or data. The terms shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors.  Specific examples of machine-storage media, computer-storage media and/or device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.  The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium” discussed below.

TRANSMISSION MEDIUM

In various example embodiments, one or more portions of the network 2220 may be an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, the Internet, a portion of the Internet, a portion of the PSTN, a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks.  For example, the network 2220 or a portion of the network 2220 may include a wireless or cellular network, and the coupling 2224 may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling.  In this example, the coupling 2224 may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1xRTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long range protocols, or other data transfer technology.

The instructions 2208 may be transmitted or received over the network 2220 using a transmission medium via a network interface device (e.g., a network interface component included in the communication components 2240) and utilizing any one of a number of well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)).  Similarly, the instructions 2208 may be transmitted or received using a transmission medium via the coupling 2226 (e.g., a peer-to-peer coupling) to the devices 2222.  The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure.  The terms “transmission medium” and “signal medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions 2208 for execution by the machine 2200, and includes digital or analog communications signals or other intangible media to facilitate communication of such software. Hence, the terms “transmission medium” and “signal medium” shall be taken to include any form of modulated data signal, carrier wave, and so forth.  The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.

COMPUTER-READABLE MEDIUM

The terms “machine-readable medium,” “computer-readable medium” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure.  The terms are defined to include both machine-storage media and transmission media.  Thus, the terms include both storage devices/media and carrier waves/modulated data signals.

Brief Description:

illustrates an item 2300 in accordance with one embodiment.

Detailed Description:
Brief Description:

illustrates an item 2400 in accordance with one embodiment.

Detailed Description:

Parts List

100

system

102

network

104

server

106

client device

200

computing device

202

monitor or graphical user interface

204

processor(s)

206

output device(s)

208

input device(s)

210

volatile memory

212

communication network interface

214

nonvolatile memory

216

communication network

218

bus subsystem

220

data processing system

222

logic

300

system

302

bus

304

memory

306

display

308

network interface

310

central processing unit

312

operating system

314

database

316

non-transitory computer-readable medium

400

digital apparatus

402

CPU

404

input devices

406

memory

408

output devices

410

storage devices

412

network interface

414

logic

500

mobile device

502

antenna

504

control logic

506

wireless communication logic

508

memory

510

power manager

512

battery

514

user interface logic

516

logic

600

arrays

700

electrical, logical, mathematical symbols

800

mobile device

802

antenna

804

signal processing and system control

806

wireless communication

808

memory

810

power manager

812

battery

814

user interface

816

camera

818

image processing

900

mobile device

902

antenna

904

antenna

906

signal processing and system control

908

wireless communication

910

SIM

912

speaker

914

vibration generator

916

memory

918

power manager

920

battery

922

user interface

924

camera

926

image processing

928

wireless communication logic

930

audio driver

932

haptic driver

1000

mobile device

1002

gyroscope

1004

memory

1006

signal processing and system control

1008

other transducers

1010

DSP

1012

altimeter

1014

motor control

1016

memory

1018

power manager

1020

battery

1022

camera

1024

image processing

1026

wireless communication logic

1028

navigation board

1030

GPU

1100

network

1102

client device

1104

client device

1106

client device

1108

server

1110

server

1112

router

1114

network

1116

network

1118

network

1200

computing environment

1202

CPU

1204

bus

1206

ROM

1208

RAM

1210

I/O adapter

1212

memory structure

1214

communication adapter

1216

communication

1218

interface device

1220

user interface adapter

1222

interface device

1224

interface device

1226

interface device

1228

display adapter

1230

display device

1300

system

1302

host OE

1304

VOE A

1306

VOE B

1308

VOE C

1400

system

1402

host OE

1404

OE D

1406

network

1408

OE B

1410

OE C

1500

system

1502

gateway

1504

OE A

1506

network

1508

OE B

1510

OE C

1512

cloud

1600

computing environment

1602

device

1604

processor

1606

physical processor memory

1608

persistent secondary storage

1610

input device adapter

1612

output device adapter

1614

network interface adapter

1616

bus

1618

virtual processor memory

1620

operating system

1622

OCE

1624

applications

1626

other libraries and subsystems

1628

input device

1630

output device

1700

operating environment

1702

application

1704

web browser

1706

application logic

1708

application logic

1710

SAA

1712

subsystem

1714

network stack

1716

application protocol service

1718

content manager

1720

content handler

1722

UI element handler

1724

UI element handler

1726

presentation controller

1728

presentation controller

1730

GUI subsystem

1732

graphics subsystem

1734

input driver

1800

computing environment

1802

service application

1804

network stack

1806

network application platform

1808

application protocol service

1810

controller

1812

model

1814

request handler

1816

model database

1818

data access manager

1820

template engine

1822

template database

1824

template

1826

view

1828

response handler

1830

data-out

1900

item

2000

item

2002

2004

2006

2008

2100

block diagram

2102

machine

2104

Software Architecture

2106

applications

2108

frameworks

2110

libraries

2112

operating system

2114

kernel

2116

services

2118

system libraries

2120

processors

2122

drivers

2124

API libraries

2126

memory

2128

other libraries

2130

contacts application

2132

browser application

2134

book reader application

2136

home application

2138

I/O components

2140

third-party application

2142

location application

2144

media application

2146

messaging application

2148

game application

2150

API calls

2152

messages

2200

machine

2202

processors

2204

memory

2206

processor

2208

instructions

2210

processor

2212

main memory

2214

static memory

2216

storage unit

2218

machine-readable medium

2220

network

2222

devices

2224

coupling

2226

coupling

2228

output components

2230

input components

2232

biometric components

2234

motion components

2236

environmental components

2238

position components

2240

communication components

2242

I/O components

2244

bus

2300

item

2302

2304

2306

2308

2310

2312

2314

2316

2318

2320

2322

2400

item


Terms/Definitions

inventive

Providing Controlled Pulses for Quantum Computing


Drawings

Brief Description:

Figure 1 shows an exemplary quantum mechanical computer radio frequency (RF) signaling system, according to one embodiment;

Detailed Description:

Figure 1 shows an exemplary embodiment of a quantum mechanical computer radio frequency (RF) signaling system 100. The quantum mechanical computer radio frequency (RF) signaling system 100 may include transmission lines 102, a plurality of networks of reactive electrical components a 106-112 coupled to the transmission lines 102, a plurality of switch units 114-120 respectively coupled to the plurality of networks of reactive electrical components a 106-112, a plurality of output-stage networks of reactive electrical components 122-128 respectively coupled to the plurality of switch units 114-120, and a plurality of substantially identical qubits 130-136 respectively coupled to the output-stage networks of reactive electrical components 122-128. The quantum mechanical computer radio frequency (RF) signaling system 100 may also include a control logic unit 104 having respective control outputs 138-144 for controlling the actuation of the switches within switch units 114-120. The control logic unit 104 may be implemented in hardware, firmware, software, or any combination thereof. For illustrative brevity only four (4) qubits 130-136 are depicted in Figure 1. It may, however, be appreciated that any number of qubits (i.e., 1-N) can be coupled to the transmission lines 102 via corresponding networks of reactive electrical components and controllable switch units.

The quantum mechanical computer radio frequency (RF) signaling system 100 may be maintained at cryogenic temperatures below one hundred (100) millikelvins (mK) in order to maintain the signaling system 100 at superconducting temperatures. For example, the quantum mechanical computer radio frequency (RF) signaling system 100 may be cooled in a cryostat to a temperature of about 30 mK. 

In operation, a radio frequency (RF) pulse signal is applied to the transmission lines 102. The transmission lines 102 are terminated by an impedance matching resistor 146 in order to mitigate RF signal reflections associated with the radio frequency (RF) pulse signal propagating along the transmission lines 102. Referring to Figure 7, an example of a radio frequency (RF) pulse signal 700 that is applied to the transmission lines 102 (Figure 1) is depicted, whereby a 4 GHz RF signal is generated over a 20 nanosecond (ns) pulse period (T.sub.pulse) at 1 microsecond (.mu.s) intervals (T.sub.int). Alternatively, according to other non-limiting examples, the radio frequency (RF) pulse signal 700 may include an RF signal in the range of about 1-10 GHz that is generated over a pulse period (T.sub.pulse) of about 10-500 ns at intervals (T.sub.int) on the order of microseconds (.mu.s), milliseconds (ms), or seconds (s).
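As a non-limiting sketch of the pulse timing described above (a 4 GHz carrier gated on for 20 ns once every 1 microsecond), the following Python fragment models the pulse train numerically; the sample rate and the two-interval window are assumptions made only for illustration:

```python
import numpy as np

f_carrier = 4e9      # 4 GHz RF carrier
t_pulse = 20e-9      # 20 ns pulse period (carrier gated on)
t_int = 1e-6         # 1 microsecond interval between pulses
f_sample = 40e9      # assumed sample rate (10x the carrier)

t = np.arange(0, 2 * t_int, 1 / f_sample)            # two pulse intervals
gate = (t % t_int) < t_pulse                          # on for the first 20 ns of each interval
rf_pulse = gate * np.sin(2 * np.pi * f_carrier * t)   # gated RF pulse train

print(f"samples: {t.size}, duty cycle: {gate.mean():.2%}")   # roughly 2% duty cycle
```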

Referring back to Figure 1, the radio frequency (RF) pulse signal 700 (Figure 7) may be tapped off the transmission lines 102 and propagate in the direction of arrow A.sub.1. As depicted, the radio frequency (RF) pulse signal 700 propagates in the direction of arrow A.sub.1 and is input to the network of reactive electrical components a 106. The network of reactive electrical components a 106 attenuates the amplitude of the radio frequency (RF) pulse signal 700 by a factor of about 10-100. The attenuated radio frequency (RF) pulse signal 700 may then be received by switch unit a 114, whereby depending on the configuration of switches R.sub.1 and R.sub.2, qubit a 130 undergoes either a predefined change in the linear combination of at least two quantum mechanical eigenstates, or maintains its current quantum mechanical eigenstate. Specifically, using control output a 138, if switch R.sub.1 of switch unit a 114 is actuated to a closed position while switch R.sub.2 of switch unit a 114 is actuated to an open position, the attenuated radio frequency (RF) pulse signal 700 passes through switch R.sub.1 and is applied to qubit a 130. By selecting the frequency of the attenuated radio frequency (RF) pulse signal 700 to substantially match the resonance of the qubit a 130, the qubit a 130 undergoes a predetermined rotation based on the amplitude of the attenuated radio frequency (RF) pulse signal 700.

In some implementations, the output-stage network of reactive electrical components a 122 may be optionally omitted such that the attenuated radio frequency (RF) pulse signal 700 (Figure 7) passes through switch R.sub.1 to qubit a 130. In other implementations, the output-stage network of reactive electrical components a 122 may be included such that the attenuated radio frequency (RF) pulse signal 700 passes through switch R.sub.1 to qubit a 130 via the output-stage network of reactive electrical components a 122. As depicted in Figure 6, the output-stage network of reactive electrical components 600 may be substantially identical to that of the network of reactive electrical components a 106. However, in some implementations, the output-stage network of reactive electrical components 600 may be different from that of the network of reactive electrical components a 106. Moreover, each of the output-stage network of reactive electrical components 600 and the network of reactive electrical components a 106 may include a mix of different reactive components (e.g., capacitors and inductors). The output-stage network of reactive electrical components 600 further attenuates the radio frequency (RF) pulse signal 700 that passes through switch R.sub.1 to qubit a 130. Additionally, the reactive components of the output-stage network of reactive electrical components 600 isolate the qubit a 130 from the resistive characteristics of switches R.sub.1 and R.sub.2 within switch unit a 114. The resistive nature of switches R.sub.1 and R.sub.2 (e.g., Field Effect Transistor (FET) switches) may accordingly cause the qubit a 130 to gradually lose its quantum eigenstate in the absence of such isolation.

Alternatively, as shown in Figure 1, using control output a 138, if switch R.sub.1 of switch unit a 114 is actuated to an open position while switch R.sub.2 of switch unit a 114 is actuated to a closed position, the qubit a 130 maintains its current eigenstate on the basis that it is isolated from the attenuated radio frequency (RF) pulse signal 700 (Figure 7) received from the network of reactive electrical components a 106. By closing switch R.sub.2, the output terminal “o” of switch R.sub.1 is electrically coupled to ground via switch R.sub.2. Thus, any electrical leakage current across open-circuit switch R.sub.1 (e.g., FET switch) may accordingly be diverted to ground via switch R.sub.2. By diverting this leakage current, potential quantum state changes associated with the qubit a 130 may be avoided. Thus, the qubit a 130 experiences longer coherence times.
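The two valid switch configurations described above can be summarized by the following short, non-limiting sketch; the function name and return strings are hypothetical and the qubit physics is reduced to a label:

```python
def switch_unit_output(r1_closed: bool, r2_closed: bool, pulse_amplitude: float) -> str:
    """Model of one switch unit (e.g., switch unit a 114) driving its qubit."""
    if r1_closed and not r2_closed:
        # The attenuated pulse passes through R1 to the qubit: predefined rotation.
        return f"apply pulse of amplitude {pulse_amplitude}: qubit rotates"
    if r2_closed and not r1_closed:
        # Output terminal 'o' is grounded via R2; leakage across R1 is diverted,
        # so the qubit maintains its current eigenstate (longer coherence).
        return "output grounded: qubit maintains its eigenstate"
    return "invalid configuration (both switches open or both closed)"

print(switch_unit_output(True, False, 0.01))
print(switch_unit_output(False, True, 0.01))
```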

As further shown in Figure 1, the radio frequency (RF) pulse signal 700 (Figure 7) may be tapped off the transmission lines 102 and also propagate in the direction of arrow A.sub.2. As depicted, the radio frequency (RF) pulse signal 700 propagates in the direction of arrow A.sub.2 and is input to the network of reactive Electrical components b 108. The network of reactive Electrical components b 108 accordingly attenuates the amplitude of the radio frequency (RF) pulse signal 700 by a factor of about 10-100. The attenuated radio frequency (RF) pulse signal 700 (Figure 7) may then be received by switch unit b 116, whereby depending on the configuration of switches R.sub.1 and R.sub.2, qubit b 132 undergoes either a predefined change in the linear combination of at least two quantum mechanical eigenstates, or maintains its current quantum mechanical eigenstate. Specifically, using control output b 140, if switch R.sub.1 of switch unit b 116 is actuated to a closed position while switch R.sub.2 of switch unit b 116 is actuated to an open position, the attenuated radio frequency (RF) pulse signal 700 passes through switch R.sub.1 and is applied to qubit b 132. Since the frequency of the attenuated radio frequency (RF) pulse signal 700 substantially matches the resonance of qubit b 132, as with qubit a 130, this qubit b 132 also undergoes the predetermined rotation based on the amplitude of the attenuated radio frequency (RF) pulse signal 700.

In some implementations, the output-stage network of reactive electrical components b 124 may be optionally omitted such that the attenuated radio frequency (RF) pulse signal 700 (Figure 7) passes through switch R.sub.1 to qubit b 132. In other implementations, the output-stage network of reactive electrical components b 124 may be included such that the attenuated radio frequency (RF) pulse signal 700 passes through switch R.sub.1 to qubit b 132 via the output-stage network of reactive electrical components b 124. As depicted in Figure 6, output-stage network of reactive electrical components 600 may be substantially identical to that of the network of reactive Electrical components b 108. As such, the output-stage network of reactive electrical components 600 further attenuates the radio frequency (RF) pulse signal 700 that passes through switch R.sub.1 to qubit b 132. Additionally, the reactive components of the output-stage network of reactive electrical components 600 isolate the qubit b 132 from the resistive characteristics of switches R.sub.1 and R.sub.2 within switch unit b 116. The resistive nature of switches R.sub.1 and R.sub.2 (e.g., Field Effect Transistor (FET) switches) may accordingly cause the qubit b 132 to gradually lose its quantum eigenstate in the absence of such isolation.

Alternatively, using control output b 140, if switch R.sub.1 of switch unit b 116 is actuated to an open position while switch R.sub.2 of switch unit b 116 is actuated to a closed position, the qubit b 132 maintains its current eigenstate on the basis that it is isolated from the attenuated radio frequency (RF) pulse signal 700 (Figure 7) received from the network of reactive Electrical components b 108. By closing switch R.sub.2 of switch unit b 116, the output terminal “o” of switch R.sub.1 is electrically coupled to ground via switch R.sub.2. Thus, any electrical leakage current across the open-circuit switch R.sub.1 (e.g., FET switch) of switch unit b 116 may accordingly be diverted to ground via switch R.sub.2. By diverting this leakage current, potential quantum state changes associated with the qubit b 132 may be avoided. Thus, the qubit b 132 experiences longer coherence times.

Still referring to Figure 1, the radio frequency (RF) pulse signal 700 (Figure 7) may be tapped off the transmission lines 102 and further propagate in the direction of arrow A.sub.3. As depicted, the radio frequency (RF) pulse signal 700 propagates in the direction of arrow A.sub.3 and is input to the network of reactive Electrical components c 110. The network of reactive Electrical components c 110 accordingly attenuates the amplitude of the radio frequency (RF) pulse signal 700 by a factor of about 10-100. The attenuated radio frequency (RF) pulse signal 700 (Figure 7) may then be received by switch unit c 118, whereby depending on the configuration of switches R.sub.1 and R.sub.2, qubit c 134 undergoes either a predefined change in the linear combination of at least two quantum mechanical eigenstates, or maintains its current quantum mechanical eigenstate. Specifically, using control output c 142, if switch R.sub.1 of switch unit c 118 is actuated to a closed position while switch R.sub.2 of switch unit c 118 is actuated to an open position, the attenuated radio frequency (RF) pulse signal 700 passes through switch R.sub.1 and is applied to qubit c 134. Since the frequency of the attenuated radio frequency (RF) pulse signal 700 substantially matches the resonance of qubit c 134, as with qubits 130-132, this qubit c 134 also undergoes the predetermined rotation based on the amplitude of the attenuated radio frequency (RF) pulse signal 700.

In some implementations, the output-stage network of reactive electrical components c 126 may be optionally omitted such that the attenuated radio frequency (RF) pulse signal 700 (Figure 7) passes through switch R.sub.1 to qubit c 134. In other implementations, the output-stage network of reactive electrical components c 126 may be included such that the attenuated radio frequency (RF) pulse signal 700 passes through switch R.sub.1 to qubit c 134 via the output-stage network of reactive electrical components c 126. As depicted in Figure 6, output-stage network of reactive electrical components 600 may be substantially identical to that of the network of reactive Electrical components c 110. As such, the output-stage network of reactive electrical components 600 further attenuates the radio frequency (RF) pulse signal 700 that passes through switch R.sub.1 to qubit c 134. Additionally, the reactive components of the output-stage network of reactive electrical components 600 isolate the qubit c 134 from the resistive characteristics of switches R.sub.1 and R.sub.2 within switch unit c 118. The resistive nature of switches R.sub.1 and R.sub.2 (e.g., Field Effect Transistor (FET) switches) may accordingly cause the qubit c 134 to gradually lose its quantum eigenstate in the absence of such isolation.

Alternatively, using control output c 142, if switch R.sub.1 of switch unit c 118 is actuated to an open position while switch R.sub.2 of switch unit c 118 is actuated to a closed position, the qubit c 134 maintains its current eigenstate on the basis that it is isolated from the attenuated radio frequency (RF) pulse signal 700 (Figure 7) received from the network of reactive Electrical components c 110. By closing switch R.sub.2 of switch unit c 118, the output terminal “o” of switch R.sub.1 is electrically coupled to ground via switch R.sub.2. Thus, any electrical leakage current across the open-circuit switch R.sub.1 (e.g., FET switch) of switch unit c 118 may accordingly be diverted to ground via switch R.sub.2. By diverting this leakage current, potential quantum state changes associated with the qubit c 134 may be avoided. Thus, the qubit c 134 experiences longer coherence times.

Still referring to Figure 1, the radio frequency (RF) pulse signal 700 (Figure 7) may be tapped off the transmission lines 102 and further propagate in the direction of arrow A.sub.4. As depicted, the radio frequency (RF) pulse signal 700 also propagates in the direction of arrow A.sub.4 and is accordingly input to the network of reactive Electrical components d 112. The network of reactive Electrical components d 112 thus attenuates the amplitude of the radio frequency (RF) pulse signal 700 by a factor of about 10-100. The attenuated radio frequency (RF) pulse signal 700 may then be received by switch unit d 120, whereby depending on the configuration of switches R.sub.1 and R.sub.2, qubit d 136 undergoes either a predefined change in the linear combination of at least two quantum mechanical eigenstates, or maintains its current quantum mechanical eigenstate. Specifically, using control output d 144, if switch R.sub.1 of switch unit d 120 is actuated to a closed position while switch R.sub.2 of switch unit d 120 is actuated to an open position, the attenuated radio frequency (RF) pulse signal 700 (Figure 7) passes through switch R.sub.1 and is applied to qubit d 136. Since the frequency of the attenuated radio frequency (RF) pulse signal 700 substantially matches the resonance of qubit d 136, as with qubits 130-134, this qubit d 136 also undergoes the predetermined rotation based on the amplitude of the attenuated radio frequency (RF) pulse signal 700.

In some implementations, the output-stage network of reactive electrical components d 128 may be optionally omitted such that the attenuated radio frequency (RF) pulse signal 700 (Figure 7) passes through switch R.sub.1 to qubit d 136. In other implementations, the output-stage network of reactive electrical components d 128 may be included such that the attenuated radio frequency (RF) pulse signal 700 passes through switch R.sub.1 to qubit d 136 via the output-stage network of reactive electrical components d 128. As depicted in Figure 6, output-stage network of reactive electrical components 600 may be substantially identical to that of the network of reactive Electrical components d 112. As such, the output-stage network of reactive electrical components 600 further attenuates the radio frequency (RF) pulse signal 700 that passes through switch R.sub.1 to qubit d 136. Additionally, the reactive components of the output-stage network of reactive electrical components 600 isolate the qubit d 136 from the resistive characteristics of switches R.sub.1 and R.sub.2 within switch unit d 120. The resistive nature of switches R.sub.1 and R.sub.2 (e.g., Field Effect Transistor (FET) switches) may accordingly cause the qubit d 136 to gradually lose its quantum eigenstate in the absence of such isolation.

Alternatively, using control output d 144, if switch R.sub.1 of switch unit d 120 is actuated to an open position while switch R.sub.2 of switch unit d 120 is actuated to a closed position, the qubit d 136 maintains its current eigenstate on the basis that it is isolated from the attenuated radio frequency (RF) pulse signal 700 (Figure 7) received from the network of reactive Electrical components d 112. By closing switch R.sub.2 of switch unit d 120, the output terminal “o” of switch R.sub.1 is electrically coupled to ground via switch R.sub.2. Thus, any electrical leakage current across the open-circuit switch R.sub.1 (e.g., FET switch) of switch unit d 120 may accordingly be diverted to ground via switch R.sub.2. By diverting this leakage current, potential quantum state changes associated with the qubit d 136 may be avoided. Thus, the qubit d 136 experiences longer coherence times.

The attenuation of the radio frequency (RF) pulse signal 700 (Figure 7) by the networks of reactive electrical components a 106-112 allows individual signal amplitude adjustment and mitigates interactions between the qubits 130-136. Referring to Figure 5, an exemplary network of reactive electrical components 502 that may be used for networks 106-112 (Figure 1) is depicted. The network of reactive electrical components 502 may be described by its equivalent circuit 504. As shown, an input RF pulse signal (i.e., RF.sub.1) is attenuated by the divider network of capacitors (i.e., reactive components) to provide an output attenuated RF pulse signal (i.e., RF.sub.2). In particular, the relationship between the input RF pulse signal (i.e., RF.sub.1) and the output attenuated RF pulse signal (i.e., RF.sub.2) is given by:

$RF_{2} = RF_{1}\left(\dfrac{C_{1}}{C_{1} + C_{2} + C_{adj}}\right)$   (equation 1)

Whereby C.sub.1 is an input capacitive reactive component having an input terminal coupled to the transmission lines 102 (Figure 1) and an output terminal coupled to parallel capacitive reactive components C.sub.adj and C.sub.2. Thus, the input capacitive reactive component C.sub.1 and the parallel configured capacitive reactive components C.sub.adj, C.sub.2 are in series. Based on equation 1, by increasing the capacitance value of variable capacitor C.sub.adj, the attenuation of the input RF pulse signal (i.e., RF.sub.1) is also increased. Conversely, by decreasing the capacitance value of variable capacitor C.sub.adj, the attenuation of the input RF pulse signal (i.e., RF.sub.1) is accordingly reduced.
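As a non-limiting numerical check of equation 1, the following sketch evaluates the divider for a few assumed femtofarad-scale values (the specific capacitances are not taken from the figures) and confirms that a larger C.sub.adj yields a larger attenuation:

```python
def divider_output(rf1: float, c1: float, c2: float, c_adj: float) -> float:
    """Capacitive divider of equation 1: RF2 = RF1 * C1 / (C1 + C2 + Cadj)."""
    return rf1 * c1 / (c1 + c2 + c_adj)

rf1 = 1.0                    # normalized input amplitude
c1, c2 = 0.5e-15, 5.0e-15    # assumed capacitances, in farads

for c_adj in (1e-15, 5e-15, 20e-15):
    rf2 = divider_output(rf1, c1, c2, c_adj)
    print(f"Cadj = {c_adj * 1e15:4.1f} fF -> RF2/RF1 = {rf2:.3f} (attenuation x{rf1 / rf2:.0f})")
```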

Referring to Figure 6, the depicted output-stage network of reactive electrical components 600 may be used for networks 122-128 of Figure 1. The output-stage network of reactive electrical components 600 may be described by its equivalent circuit 602. As shown, the attenuated RF pulse signal RF.sub.2 output from network 502 (Figure 5) is (optionally) further attenuated (i.e., RF pulse signal RF.sub.3) by the divider network of capacitors (i.e., reactive components) corresponding to output-stage network of reactive electrical components 600. In particular, the relationship between the inputted attenuated RF pulse signal RF.sub.2 and the outputted further attenuated RF pulse signal RF.sub.3 is given by: 

$RF_{3} = RF_{2}\left(\dfrac{C'_{1}}{C'_{1} + C'_{2} + C'_{adj}}\right)$   (equation 2)

Whereby C’.sub.1 is an input capacitive reactive component having an input terminal coupled to output terminal “o” (Figure 1) of a respective switch unit and an output terminal coupled to parallel capacitive reactive components C’.sub.adj and C’.sub.2. Thus, the input capacitive reactive component and the parallel configured capacitive reactive components C’.sub.adj, C’.sub.2 are in series. Based on equation 2, by increasing the capacitance value of variable capacitor C’.sub.adj, the attenuation of the attenuated input RF pulse signal (i.e., RF.sub.2) is also increased. Conversely, by decreasing the capacitance value of variable capacitor C’.sub.adj, the attenuation of the attenuated input RF pulse signal (i.e., RF.sub.2) is accordingly reduced. As previously described, the circuits depicted in both Figures 5 and 6 may be identical, thus applying the same attenuation to the received RF pulse signals. Moreover, the circuits depicted in both Figures 5 and 6 are utilized in both the plurality of networks of reactive electrical components a 106-112 (Figure 1) and the plurality of output-stage networks of reactive electrical components 122-128 (Figure 1), respectively.
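When the optional output stage is present, the two dividers cascade, so the overall attenuation is the product of the two stage ratios; a brief non-limiting sketch with assumed values:

```python
def divider_ratio(c1: float, c2: float, c_adj: float) -> float:
    """Single-stage ratio from equations 1 and 2: C1 / (C1 + C2 + Cadj)."""
    return c1 / (c1 + c2 + c_adj)

rf1 = 1.0
stage1 = divider_ratio(0.5e-15, 5.0e-15, 5.0e-15)   # network 106-112 (assumed values)
stage2 = divider_ratio(0.5e-15, 5.0e-15, 5.0e-15)   # output-stage network 122-128 (assumed values)
rf3 = rf1 * stage1 * stage2                          # RF3 = RF2 * A2 = RF1 * A1 * A2
print(f"RF3/RF1 = {rf3:.4f}")                        # identical stages multiply their attenuation
```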

Referring to Figure 3, an exemplary controllable reactive component 302 used to implement a variable capacitor C.sub.adj is depicted. The exemplary controllable reactive component 302 represented by variable capacitor C.sub.adj may be used in both the plurality of networks of reactive electrical components a 106-112 (Figure 1: C.sub.adj) and the plurality of output-stage networks of reactive electrical components 122-128 (Figure 1; and Figure 6: C.sub.adj), respectively. As depicted, the controllable reactive component 302 may include a parallel configuration of multiple capacitors C.sub.adj1, C.sub.adj2, C.sub.adj3, and C.sub.adj4. Each of the capacitors C.sub.adj1, C.sub.adj2, C.sub.adj3, C.sub.adj4 is connected to ground via respective switches S.sub.adj1, S.sub.adj2, S.sub.adj3, and S.sub.adj4. In particular, one terminal of each of the capacitors C.sub.adj1, C.sub.adj2, C.sub.adj3, C.sub.adj4 is coupled together, while the other terminal of each of the capacitors C.sub.adj1, C.sub.adj2, C.sub.adj3, C.sub.adj4 is connected in series to respective switches S.sub.adj1, S.sub.adj2, S.sub.adj3, and S.sub.adj4. In operation, by actuating the switches S.sub.adj1, S.sub.adj2, S.sub.adj3, S.sub.adj4 to a closed position, the capacitors C.sub.adj1, C.sub.adj2, C.sub.adj3, C.sub.adj4 are coupled to ground and remain part of the parallel configuration of capacitors. Alternatively, by actuating the switches S.sub.adj1, S.sub.adj2, S.sub.adj3, S.sub.adj4 to an open position, the capacitors C.sub.adj1, C.sub.adj2, C.sub.adj3, C.sub.adj4 are not coupled to ground and are thus removed from the parallel configuration of capacitors. For example, by actuating switches S.sub.adj1 and S.sub.adj4 to a closed position and switches S.sub.adj2 and S.sub.adj3 to an open position, capacitors C.sub.adj1 and C.sub.adj4 are coupled to ground and in a parallel configuration, while capacitors C.sub.adj2 and C.sub.adj3 are not within the parallel configuration. The total capacitance is thus the sum of capacitors C.sub.adj1 and C.sub.adj4. By varying the switch positions, different capacitance values can therefore be obtained for altering the attenuation factors within the networks of reactive electrical components a 106-112 and the plurality of output-stage networks of reactive electrical components 122-128. For example, in order to increase the total capacitance given by the sum of capacitors C.sub.adj1 and C.sub.adj4, switch S.sub.adj3 may additionally be actuated to a closed position. The total capacitance is now the sum of capacitors C.sub.adj1, C.sub.adj3, and C.sub.adj4. Thus, the controllable reactive component 302 provides an exemplary adjustable reactance within the plurality of networks of reactive electrical components a 106-112 (Figure 1) and the plurality of output-stage networks of reactive electrical components 122-128 (Figure 1). Accordingly, varying this adjustable reactance in turn varies the attenuation provided by the plurality of networks of reactive electrical components a 106-112 (Figure 1) and the plurality of output-stage networks of reactive electrical components 122-128 (Figure 1).
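A non-limiting sketch of the switched-capacitor bank of Figure 3: a branch contributes its capacitance only while its switch grounds it, so the total adjustable capacitance is simply the sum of the enabled branches (the branch values below are assumed):

```python
def adjustable_capacitance(caps_ff, switches_closed):
    """Total Cadj (in fF) of a parallel bank where only grounded branches count."""
    return sum(c for c, closed in zip(caps_ff, switches_closed) if closed)

caps = [1.0, 2.0, 4.0, 8.0]   # Cadj1..Cadj4 in fF (assumed, binary-weighted for resolution)

print(adjustable_capacitance(caps, [True, False, False, True]))   # Cadj1 + Cadj4 = 9.0 fF
print(adjustable_capacitance(caps, [True, False, True, True]))    # closing Sadj3 adds Cadj3 -> 13.0 fF
```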

For illustrative brevity only four (4) parallel capacitors and switches are depicted in Figure 3. However, it may be appreciated that any number of parallel capacitors may be utilized in order to establish the requisite resolution of attenuation factor variation exhibited by any one of the plurality of networks of reactive electrical components a 106-112 (Figure 1) and the optionally provided plurality of output-stage networks of reactive electrical components 122-128 (Figure 1). The capacitors associated with the networks of reactive electrical components a 106-112 (Figure 1) and the plurality of output-stage networks of reactive electrical components 122-128 (Figure 1) may have capacitance values in the range of 0.1-10 femtofarads (fFs). However, greater or lesser values may be contemplated. 

Referring to Figure 2, the switches S.sub.adj1, S.sub.adj2, S.sub.adj3, S.sub.adj4 (Figure 3) used in the exemplary controllable reactive component 302 (Figure 3) may be implemented by a transistor device. Thus, switch 802 may be implemented by FET device 804. More specifically, by applying a control voltage to the gate G of the FET device 804, a closed electrical circuit connection may be established between the drain D and the source S of the FET device 804.

Referring to Figure 4, each of the qubits 130-136 shown in Figure 1 may, for example, include a transmon 402. As depicted, the transmon 402 may be characterized as a resonant circuit 404 having a capacitance C and a non-linear inductance L. Thus, when the transmon receives an RF pulse signal having a frequency that is substantially the same as (i.e., matches) its resonant frequency, the transmon may accordingly oscillate backwards and forwards between, for example, two (2) quantum mechanical eigenstates. The oscillation backwards and forwards between these two states occurs at a lower frequency that is proportional to the amplitude of the RF pulse signal. Therefore, as previously described, by controlling the amplitude of the RF pulse signal that is applied to the transmon 402, a desired quantum mechanical eigenstate may be achieved at the end of each pulse period. The transmon 402 may include a Josephson junction formed by a metal-insulator-metal (MIM) layer of aluminum, aluminum oxide, and aluminum.
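For orientation only, the small-signal resonance of the equivalent circuit 404 follows the usual LC relation, f = 1/(2.pi.(LC)^1/2); the component values in this non-limiting sketch are assumptions chosen merely to land near the multi-GHz signals discussed herein:

```python
import math

def lc_resonance_hz(inductance_h: float, capacitance_f: float) -> float:
    """Small-signal resonant frequency of an LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

L = 12e-9     # 12 nH effective (non-linear) junction inductance, assumed
C = 100e-15   # 100 fF shunt capacitance, assumed
print(f"{lc_resonance_hz(L, C) / 1e9:.2f} GHz")   # about 4.6 GHz with these values
```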

Referring back to Figure 1, in operation, two or more of the substantially identical qubits 130-136 may require a predefined change in their respective quantum mechanical eigenstates (e.g., a .pi./2 rotation). Referring to Figure 7, a qubit’s angular rotation is proportional to the product of the amplitude (V.sub.rf) and pulse period (T.sub.pulse) of the radio frequency (RF) pulse signal 700. Since the pulse period (T.sub.pulse) of the radio frequency (RF) pulse signal 700 is the same for all of the substantially identical qubits 130-136 (Figure 1), adjustments to the angular rotation of each individual qubit 130-136 (Figure 1) are accomplished by varying the amplitude (V.sub.rf) of the radio frequency (RF) pulse signal 700 via the respective networks of reactive electrical components a 106-112 (Figure 1). Referring back to Figure 1, in particular, the respective networks of reactive electrical components a 106-112 provide such an adjustment by means of variable capacitor C.sub.adj. Also, as previously described, in embodiments that further include the output-stage networks of reactive electrical components 122-128, the amplitude V.sub.rf (Figure 7) of the radio frequency (RF) pulse signal 700 (Figure 7) may be further adjusted using variable capacitors C’.sub.adj (Figure 6).
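A short non-limiting sketch of the proportionality stated above: with the pulse period fixed for all qubits, a target rotation (e.g., .pi./2) fixes the required attenuated amplitude. The proportionality constant k is a hypothetical calibration value, not a figure from this disclosure:

```python
import math

def rotation_angle(v_rf: float, t_pulse: float, k: float) -> float:
    """Rotation proportional to amplitude x pulse period; k is a calibration constant in rad/(V*s)."""
    return k * v_rf * t_pulse

def amplitude_for_rotation(target_rad: float, t_pulse: float, k: float) -> float:
    """Invert the proportionality to find the attenuated amplitude needed."""
    return target_rad / (k * t_pulse)

k = 2.0e9          # hypothetical calibration constant
t_pulse = 20e-9    # 20 ns, common to all qubits
v_needed = amplitude_for_rotation(math.pi / 2, t_pulse, k)
print(f"amplitude for a pi/2 rotation: {v_needed * 1e3:.1f} mV")
print(f"check: {rotation_angle(v_needed, t_pulse, k):.4f} rad")   # ~1.5708 rad
```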

For example, the radio frequency (RF) pulse signal 700 (Figure 7) may be applied to qubit a 130 and qubit b 132 by configuring respective switch unit a 114 and switch unit b 116 accordingly. Since qubit a 130 and qubit b 132 are substantially identical and receive the same radio frequency (RF) pulse signal 700 that is tapped off the transmission lines 102, there may be an expectation that qubit a 130 and qubit b 132 undergo the same quantum mechanical rotation. This expectation may however be thwarted by a difference in reactive component tolerances between the network of reactive electrical components a 106 corresponding to qubit a 130 and the network of reactive Electrical components b 108 corresponding to qubit b 132. More specifically, although the capacitors (i.e., reactive components) within qubit a 130’s network of reactive electrical components a 106 are manufactured to be the same as qubit b 132’s network of reactive Electrical components b 108, the manufacturing process may cause slight variations in the capacitance values between the networks of reactive electrical components 106, 108. For instance, although capacitor C.sub.2 within qubit a 130’s network of reactive electrical components a 106 is manufactured to have the same capacitance as capacitor C.sub.2 within qubit b 132’s network of reactive Electrical components b 108, due to manufacturing tolerances, the C.sub.2 capacitor values in the networks of reactive electrical components 106, 108 may slightly differ. This causes a slight difference in capacitive reactance value, which in turn contributes to differences in attenuation between network 106 and network 108. Thus, for the same applied RF pulse signal 700 (Figure 7), qubit a 130 and qubit b 132 undergo different rotations as a result of the RF pulse signal 700 being attenuated by slightly different amounts before being applied to qubit a 130 and qubit b 132. However, during calibration, each of the networks of reactive electrical components 106, 108 can be individually adjusted to compensate for such differences in attenuation resulting from reactive component tolerances. Thus, by making the appropriate adjustments, each of qubit a 130 and qubit b 132 receives an attenuated RF pulse signal having the same amplitude, which subsequently causes both qubits to undergo the same predetermined rotation (e.g., a .pi./2 rotation). More particularly, the C.sub.adj capacitance values of the networks of reactive electrical components 106, 108 may be adjusted to compensate for such differences in attenuation resulting from the reactive component tolerances associated with the networks of reactive electrical components 106, 108.
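A non-limiting calibration sketch under the divider model of equation 1: given a tolerance-induced mismatch in C.sub.2 between two otherwise identical networks, the C.sub.adj of one network is solved for so that both qubits see the same attenuated amplitude (all values are assumed):

```python
def attenuation(c1: float, c2: float, c_adj: float) -> float:
    """Equation 1 divider ratio RF2/RF1."""
    return c1 / (c1 + c2 + c_adj)

# Assumed values: network 106's C2 came out 2% high due to manufacturing tolerance.
c1 = 0.5e-15
c2_a, c2_b = 5.10e-15, 5.00e-15
c_adj_b = 5.00e-15                  # reference setting on network 108

# Equal ratios with equal C1 require equal denominators:
c_adj_a = c2_b + c_adj_b - c2_a     # compensating setting for network 106
print(f"Cadj for network 106: {c_adj_a * 1e15:.2f} fF")
print(attenuation(c1, c2_a, c_adj_a), attenuation(c1, c2_b, c_adj_b))   # now essentially equal
```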

Brief Description:

illustrates an item 200 in accordance with one embodiment.

Detailed Description:
Brief Description:

illustrates an item 300 in accordance with one embodiment.

Detailed Description:
Brief Description:

illustrates an item 400 in accordance with one embodiment.

Detailed Description:
Brief Description:

illustrates an item 500 in accordance with one embodiment.

Detailed Description:
Brief Description:

illustrates an item 600 in accordance with one embodiment.

Detailed Description:
Brief Description:

illustrates an item 700 in accordance with one embodiment.

Detailed Description:
Brief Description:

Figure 8 shows a quantum mechanical computer radio frequency (RF) signaling system, according to another embodiment

Detailed Description:

Figure 8 shows a quantum mechanical computer radio frequency (RF) signaling system 800, according to another embodiment. In particular, the quantum mechanical computer radio frequency (RF) signaling system 800 enables quantum entanglement between reactively coupled qubits. As depicted, quantum mechanical computer radio frequency (RF) signaling system 800 includes transmission lines 102, a plurality of networks of reactive electrical components 802-804 coupled to the transmission lines 102, a switch control unit 812 having control outputs 138-144 that are coupled to the plurality of networks of reactive electrical components 802-804d, and a plurality of qubits (qubit x 806 and qubit y 808) coupled to the plurality of networks of reactive electrical components 802-804d. As further depicted, a reactive coupling element 810 may couple qubit x 806 and qubit y 808. The reactive coupling element 810 may include a network of reactive components, a single capacitor between points A and B, or a transmission line capacitively coupled to links 825 and 827. Using reactive coupling element 810, a quantum entanglement condition between qubit x 806 and qubit y 808 may be accomplished.

The switch control unit 812 includes respective control outputs 138-144 that, among other things, control the actuation of switches within the plurality of networks of reactive electrical components 802-804d. In particular, the plurality of networks of reactive electrical components 802-804d may be identical to those utilized in Figure 1. The actuation of such switches is depicted in Figure 3, whereby under the control of a reactive network switch control unit 906, different capacitance values and attenuation factors can be set. Switch control unit 812 may be identical to reactive network switch control unit 906, and thus controls the capacitance values and attenuation factors for the plurality of networks of reactive electrical components 802-804d. Although not depicted in Figure 8, as with Figure 1, a switch unit identical to or similar to switch unit a 114 (Figure 1) may be utilized between networks of reactive electrical components 802 and 804b, and qubit x 806. Moreover, a switch unit identical to or similar to switch unit b 116 (Figure 1) may be utilized between networks of reactive electrical components 804c and 804d, and qubit y 808. Thus, depending on the configuration of switches R.sub.1 and R.sub.2 within each of the switch unit a 114 and switch unit b 116 (Figure 1), qubit x 806 and qubit y 808 undergo either a predefined change in the linear combination of at least two quantum mechanical eigenstates, or maintain their current quantum mechanical eigenstate. The switch control unit 812 may be implemented in hardware, firmware, software, or any combination thereof. For illustrative brevity only two (2) adjacent qubits (qubit x 806 and qubit y 808) are depicted in Figure 8. It may, however, be appreciated that any number of qubits (i.e., 1-N) can be coupled to the transmission lines 102 via corresponding networks of reactive electrical components.

In operation, radio frequency (RF) pulse signals f.sub.1 and f.sub.2 are applied to respective transmission lines 102. The transmission lines 102 are each terminated by an impedance matching resistor 146 in order to mitigate RF signal reflections associated with the radio frequency (RF) pulse signals f.sub.1, f.sub.2 propagating along each of the transmission lines 102. RF pulse signals f.sub.1 and f.sub.2 may be similar to the RF pulse signal illustrated in Figure 7. For example, RF pulse signal f.sub.1 may include a 5.00 GHz RF signal that is generated over a 20 nanosecond (ns) pulse period (T.sub.pulse) at 1 microsecond (.mu.s) intervals (T.sub.int). Also, RF pulse signal f.sub.2 may include a 5.05 GHz RF signal that is generated over a 20 nanosecond (ns) pulse period (T.sub.pulse) at 1 microsecond (.mu.s) intervals (T.sub.int).

In the embodiment of Figure 8, each of the  qubit x 806 and qubit y 808 can be driven by one of two different RF pulse signals (f.sub.1 or f.sub.2) that are each attenuated by a network of reactive electrical components. Furthermore, adjacent  qubit x 806 and qubit y 808 may be reactively coupled to each other via the reactive coupling element 810. Specifically, as depicted in Figure 8, an RF pulse signal f.sub.1 may be applied to, and propagate, along transmission lines 102. The RF pulse signal f.sub.1 is then tapped off the transmission lines 102 and attenuated by the network of reactive electrical components  802, whereby the attenuated RF pulse signal f.sub.1 is applied to  qubit x 806. Another RF pulse signal f.sub.2 may be applied to, and propagate, along transmission line  . The RF pulse signal f.sub.2 is then tapped off the transmission line   and attenuated by the network of reactive electrical components 804b, whereby the attenuated RF pulse signal f.sub.2 is also applied to  qubit x 806. Thus,  qubit x 806 may be driven either by an attenuated version of RF pulse signal f.sub.1 or an attenuated version of RF pulse signal f.sub.2.

Similarly, as further depicted in Figure 8, RF pulse signal f.sub.1 is also tapped off transmission lines 102 and attenuated by the network of reactive electrical components 804c, whereby the attenuated RF pulse signal f.sub.1 is applied to qubit y 808. RF pulse signal f.sub.2 is also tapped off transmission line   and attenuated by the network of reactive electrical components 804d, whereby the attenuated RF pulse signal f.sub.2 is also applied to qubit y 808. Thus, qubit y 808 may be driven either by an attenuated version of RF pulse signal f.sub.1 or an attenuated version of RF pulse signal f.sub.2. 

As previously described in relation to Figure 7, a qubit’s angular rotation is proportional to the product of the amplitude (V.sub.rf) and pulse period (T.sub.pulse) of the radio frequency (RF) pulse signal 700. Since the pulse period (T.sub.pulse) of the radio frequency (RF) pulse signal 700 is related to its frequency, which is set to the resonance frequency of qubit x 806 and qubit y 808 (Figure 8), adjustments to the angular rotation of each individual qubit x 806 and qubit y 808 (Figure 8) are accomplished by varying the amplitude (V.sub.rf) of the radio frequency (RF) pulse signals f.sub.1, f.sub.2 via the respective networks of reactive electrical components 802-804b (Figure 8).

Referring back to Figure 8, since the networks of reactive electrical components 802-804d each have an identical electrical configuration to the network depicted in Figure 5, the networks of reactive electrical components 802-804d accordingly provide such an adjustment by means of variable capacitor C.sub.adj. Thus, for each of the networks of reactive electrical components 802-804d, increasing the capacitance of the variable capacitor C.sub.adj increases the attenuation, while decreasing the capacitance of the variable capacitor C.sub.adj decreases the attenuation provided by the network.

The embodiment of Figure 8 may operate in two modes, whereby the quantum mechanical rotation of each of qubit x 806 and qubit y 808 is either controlled separately (mode 1) or undergoes quantum mechanical entanglement with the other qubit (mode 2). In mode 1, for example, qubit x 806 may receive RF pulse signal f.sub.1 (e.g., 5.00 GHz), which is attenuated by the network of reactive electrical components 802. Since the frequency of RF pulse signal f.sub.1 substantially matches the resonance frequency of qubit x 806, the qubit x 806 undergoes a predefined change (e.g., .pi./2) in the linear combination of at least two quantum mechanical eigenstates. However, since qubit y 808 has a resonant frequency that substantially matches the frequency of RF pulse signal f.sub.2 (e.g., 5.05 GHz), based upon receiving attenuated RF pulse signal f.sub.1 from network 804c, the quantum mechanical eigenstate of qubit y 808 may remain substantially unchanged. Also in mode 1, for example, qubit y 808 may receive RF pulse signal f.sub.2 that is attenuated by network of reactive electrical components 804d. Since the frequency of RF pulse signal f.sub.2 substantially matches the resonance frequency of qubit y 808, the qubit y 808 undergoes a predefined change (e.g., .pi./2) in the linear combination of at least two quantum mechanical eigenstates. However, since qubit x 806 has a resonant frequency that substantially matches the frequency of RF pulse signal f.sub.1, based upon receiving attenuated RF pulse signal f.sub.2 from network 804b, the quantum mechanical eigenstate of qubit x 806 may remain unchanged. Thus, by applying RF pulse signal f.sub.1 (e.g., 5.00 GHz) to qubit x 806 and applying RF pulse signal f.sub.2 (e.g., 5.05 GHz) to qubit y 808, the eigenstate of each of qubit x 806 and qubit y 808 is individually controlled via an RF pulse signal that matches its individual resonant frequency.
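A toy, non-limiting sketch of the mode 1 selectivity described above: a qubit responds only when the applied pulse frequency lies within an assumed linewidth of its own resonance (the 10 MHz tolerance is purely illustrative):

```python
QUBITS_HZ = {"qubit x 806": 5.00e9, "qubit y 808": 5.05e9}   # resonant frequencies

def responds(pulse_hz: float, resonance_hz: float, linewidth_hz: float = 10e6) -> bool:
    """Mode 1: a qubit rotates only if the pulse substantially matches its resonance."""
    return abs(pulse_hz - resonance_hz) <= linewidth_hz

for f_pulse in (5.00e9, 5.05e9):
    rotated = [name for name, f0 in QUBITS_HZ.items() if responds(f_pulse, f0)]
    print(f"f = {f_pulse / 1e9:.2f} GHz rotates: {rotated}")
```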

In mode 2, for example, qubit x 806 receives RF pulse signal f.sub.2 that is attenuated by network of reactive electrical components 804b. Qubit y 808 also receives RF pulse signal f.sub.2 that is attenuated by network of reactive electrical components 804d. Although received RF pulse signal f.sub.2 matches the resonant frequency of qubit y 808 and does not substantially match the resonance frequency of qubit x 806, if the amplitude of RF pulse signal f.sub.2 is sufficient, some quantum entanglement occurs between qubit x 806 and qubit y 808 via the reactive coupling element 810. In particular, the quantum mechanical eigenstate change experienced by qubit y 808 depends on the quantum mechanical eigenstate of qubit x 806. Thus, qubit x 806 and qubit y 808 are entangled.

The quantum mechanical computer radio frequency (RF) signaling system 800 may be maintained at cryogenic temperatures below one hundred (100) millikelvins (mK) in order to maintain the quantum mechanical computer radio frequency (RF) signaling system 800 at superconducting temperatures. For example, the quantum mechanical computer radio frequency (RF) signaling system 800 may be cooled in a cryostat to a temperature of about 30 mK. 

Brief Description:

Figure 9 shows an exemplary two-dimensional array of qubits receiving RF pulse signals, according to one embodiment

Detailed Description:

Figure 9 shows an exemplary two-dimensional array of qubits 900 receiving RF pulse signals, according to one embodiment. Each of the dark circles denoted by numbers 1-5 represents a qubit. As depicted, each of the qubits is weakly coupled (i.e., reactively) by quantum communication (QC) links QCL. The QC links QCL may include superconductive electrical links formed from, for example, Aluminum (Al) or Niobium (Nb), whereby each QC link QCL may be reactively coupled (e.g., capacitively) to a qubit at each end. The QC links QCL provide a means for enabling quantum entanglement conditions between the qubits in the array. As shown, qubits 1 are coupled to transmission line TL1 and are driven at a frequency F1, qubits 2 are coupled to transmission line TL2 and are driven at a frequency F2, qubits 3 are coupled to transmission line TL3 and are driven at a frequency F3, qubits 4 are coupled to transmission line TL4 and are driven at a frequency F4, qubits 5 are coupled to transmission line TL5 and are driven at a frequency F5, and qubits 1 are coupled to transmission line TL’1 and are also driven at frequency F1. In a serpentine connection approach, transmission lines that carry the same frequency to the qubits may be connected together. For example, transmission lines TL1 and TL’1 are both driven at the same frequency F1, and are thus connected together, as indicated by dashed line connection 908. Although for illustrative brevity, only transmission lines TL1 and TL’1 are shown, additional transmission lines coupled to other qubits 1 in the exemplary two-dimensional array of qubits 900 would also follow a serpentine pattern of connections. The same rationale may be applied to transmission lines TL2-TL5.

Alternatively, connections such as connection 908 may be omitted in favor of, for example, inductively coupling a frequency source to transmission lines being driven at the same frequency. For example, transmission lines TL1 and TL’1 are both driven at the same frequency F1. Therefore, the RF output signal from a single RF source 902 (i.e., a signal generator) may be inductively coupled to each of the transmission lines TL1, TL’1 that are driven at the same frequency F1. In particular, inductive coupling device a 904 couples the RF output signal from RF source 902 to transmission line TL1, while inductive coupling device b 906 couples the RF output signal from RF source 902 to transmission line TL’1. The same rationale may be applied to transmission lines TL2-TL5. The qubits 1-5 depicted in exemplary two-dimensional array of qubits 900 may include the same or similar circuitry for receiving an attenuated RF pulse signal as those corresponding to system 100 of Figure 1.

The above exemplary two-dimensional array of qubits 900 (qubits 1-5) may be utilized to form, among other things, a surface code method of error prevention/correction using a discrete number of frequencies, pulse shapes, and phases. For example, one approach may contemplate using five (5) different qubit frequencies (i.e., F1-F5), and six (6) or more different pulses (e.g., pulse period, pulse interval, etc.) associated with each frequency. It may be appreciated that the depicted 2D mesh is exemplary. Thus, different lattices with different interconnections of the qubits and a different number of frequencies can be utilized. 

The two-dimensional array of qubits 900 may be maintained at cryogenic temperatures below one hundred (100) millikelvins (mK) in order to maintain the array 900 at superconducting temperatures. For example, the two-dimensional array of qubits 900 may be cooled in a cryostat to a temperature of about 30 mK. 

Although the exemplary embodiments described in the foregoing include networks of reactive components having capacitor devices, other reactive components such as inductors may also be utilized in order to provide a divider network capable of attenuating the received RF pulse signals in a controlled manner.


Parts List

100

quantum mechanical computer radio frequency (RF) signaling system

102

transmission lines

104

control logic unit

106

electrical components a

108

Electrical components b

110

Electrical components c

112

Electrical components d

114

switch unit a

116

switch unit b

118

switch unit c

120

switch unit d

122

output-stage network of reactive electrical components a

124

output-stage network of reactive electrical components b

126

output-stage network of reactive electrical components c

128

output-stage network of reactive electrical components d

130

qubit a

132

qubit b

134

qubit c

136

qubit d

138

control output a

140

control output b

142

control output c

144

control output d

146

impedance matching resistor

200

item

300

item

400

item

500

item

600

item

700

item

800

quantum mechanical computer radio frequency (RF) signaling system

802

networks of reactive electrical components

804

links

806

qubit x

808

qubit y

810

reactive coupling element

812

switch unit a

900

exemplary two-dimensional array of qubits

902

RF source

904

inductive coupling device a

906

inductive coupling device b

908

connection


Terms/Definitions

logic circuitry

above array

flowchart or block diagrams

switch positions

outputted

oriented programming language

radio frequency (RF) pulse signals

quantum mechanical eigenstate change

state-setting data

transmission line

serpentine pattern

qubits 1

circuits

aluminum

first plurality

superconducting temperatures

0.1-10 femtofarads

RF output signal

electronic circuitry

reactive network switch control

magnetic storage device

pulse signals

Electrical components c

software

optical transmission fibers

reactance

qubit x

series

portable computer diskette

electronic storage device

local area network

factor

links

leakage

input capacitive reactive component

impedance matching resistor

switch unit

(mode

more different pulses

qubits 2

pi./2 quantum mechanical eigenstate change

state change

processor

given example

copper transmission cables

discrete number

QCSG

quantum entanglement

further propagate

relationship

suitable combination

wire

control output b

mechanical eigenstates

calibration

field-programmable gate arrays

similar programming languages

sub.2 propagating

turn

functionality

floppy disk

basis

computer program products

optical storage device

four

Quantum Computing Signal Generation

longer coherence times

machine instructions

qubit a

operational steps

special purpose hardware

output-stage network of reactive electrical components c

control logic unit

reactive component

links QCL

source code

method

millikelvins

segment

transistor

FET switch

controllable reactive component

implement aspects

flowchart illustration

storage

closing switch R.sub

particular attenuation factor

latter scenario

dark circles

inductors

SRAM

gate G

closed electrical circuit connection

resonance frequency

substantially identical qubits

reactive coupling element

actuating switches S

parallel capacitors

manufacturing tolerances

error prevention/correction

process flow

means

Aluminum (Al) or Niobium

networks of reactive electrical components

read-only memory

FPGA

qubit b

transmission lines

5.00 Ghz RF signal

open position

output-stage network of reactive electrical components a

ground

part

output-stage network of reactive electrical components b

similar circuitry

five

Field Effect

their respective quantum mechanical eigenstates

instance

example

illustrations and/or block diagrams

computer readable program instructions

inductive coupling device b

portable compact disc

one embodiment

switch unit c

frequencies

lines

two modes

mode

operations

punch-cards

system

groove

flowchart and block diagrams

connection

portion

R.sub.2

specified logical function(s)

reference

switch unit d

pulse period T.sub.pulse

microseconds

non-linear inductance LAO

electromagnetic waves

control output a

input terminal

open-circuit switch R.sub

quantum computing environment

attenuated version

attenuated RF pulse signal

pulse signal propagating

optionally provided plurality

attenuated radio frequency (RF) pulse signal

their individual resonant frequency

firmware instructions

R.sub.1

RF signal reflections

control voltage

code

illustrative brevity

radio frequency

frequency source

computing/processing devices

type

discrete capacitor divider networks

pulse interval time

predetermined rotation

serpentine connection approach

quantum

josephson junction

“C” programming language

machine

intervals

qubit y

pulse shapes

transmon

particular manner

capacitor

state information

RF source

FIGS

electrical signals

remote computer or server

special purpose computer

potential quantum state changes

routers

attenuated input RF pulse signal

apparatus

1 microsecond

blocks

leakage current

architecture

electromagnetic storage device

first switch configuration

resistive nature

mechanically encoded device

divider network

tangible device

matches

multiple capacitors C.sub.adj

implementations

calibration processes

superconductive electrical links

possible implementations

predefined change

absence

quantum mechanical computer radio frequency (RF) signaling system

transitory signals

requisite resolution

firewalls, switches, gateway computers

open switch R.sub

resonant frequency

closed position

invention

functions

foregoing describes

reactively coupled qubits

qubits 4

two blocks

predefined state change

controlled manner

-stage network

electrical components a

specified functions or acts

signal

FET switches

smalltalk

instruction execution device

stand-alone software package

network

phases

output

two different RF pulse signals

adjustments

functions/acts

internet

switch unit a

article

resonant circuit

output-stage network of reactive electrical components d

structures

non-exhaustive list

present invention

operation

fiber-optic cable

second plurality

relation

selected

firmware

capacitance value

edge servers

favor

lower frequency

parallel configured capacitive reactive components C.sub.adj

depicted output-stage network

cryostat

flowchart and/or block diagram block or blocks

qubit c

external storage device

oscillation frequency backwards and forwards

selected RF pulse

memory stick

depicted 2D mesh

control output d

their current quantum mechanical eigenstate

4 Ghz RF signal

order

unmatched attenuation factors

instructions

numbers

eigenstate change

dashed line

identical electrical configuration

special purpose hardware-based systems

electrical path

qubits 3

cryogenic temperatures

figures

surface code method

parallel configured capacitive reactive components C’.sub.adj

RF pulse sources

reactive network switch control unit

cryogenically

predefined quantum mechanical eigenstate change

quantum mechanical eigenstate

function/act

module

computer

QCSG Program

general purpose computer

RF pulse signal

20 nanosecond (ns) pulse period

capacitance C

Electrical components b

their current state

pulse interval

flowchart illustrations and/or block diagrams

qubit d

line

mitigates interactions

static random access memory

Source S

quantum entanglement conditions

electrical leakage

radio frequency (RF) pulse signal

computer instructions

adjustable reactance

pulse signal

block diagrams

transmitted

network switch control unit

parallel configuration

aspects

reactive components

remote computer

capacitor C.sub

radio waves

external computer

microcode

common RF pulse signal

quantum mechanical entanglement

wireless transmission

range

methods

hundred

total capacitance

array

computing/processing device

configuration

arrow A.sub

QC links

number

resonance

couples

reactive component tolerances

RF signal

quantum entanglement condition

second switch configuration

switch control unit

direction

two-dimensional array

angular rotation

random access memory

control

pulse period

wide area network

inductive coupling device a

difference

erasable programmable read-only memory

current quantum mechanical eigenstate

combination

block

storage medium

set frequency

mechanical computer radio frequency

metal-insulator-metal

succession

capacitor devices

resistive characteristics

embodiment

instruction-set-architecture

plurality

digital versatile disk

process

fact

transistor device

output terminal

two states

.pi./2 rotation

light pulses

object

logic arrays

propagate

expectation

hardware

quantum communication

predefined quantum

product

equation

input

points

following

wireless network

semiconductor storage device

embodiments

EPROM or Flash memory

5.05 Ghz RF signal

actuation

greater or lesser values

and network

attenuation factor variation

regard

drain D

program instructions

programmable data processing apparatus

predefined .pi

control output c

creation

Electrical components d

systems

adjacent qubits

Internet Service Provider

electrical leakage current

configures

computer program product

result

qubits 5

switch unit b

manufacturing process

quantum mechanical rotation

selected qubits

hard disk

frequency

more specific examples

unit

linear combination

operating

manufacture

FET device

waveguide

multiple transmission lines

foregoing

conventional procedural programming languages

temperature

aluminum, aluminum oxide

reverse order

e.g., capacitors and inductors

qubit n

Feature File Validation Tool


Drawings

Brief Description:

illustrates a system 100 for software development testing;

Detailed Description:

Figure 1 illustrates a system 100 for software development testing. As illustrated in Figure 1, system 100 may include one or more device(s) 102, a network 106, a database 108, and a feature file validation tool 110. In particular embodiments, system 100 reduces the number of errors in computer software code.

Device(s) 102 may be any devices that operate and/or communicate with other components of system 100. In general, device(s) 102 request and receive processed data. For example, device(s) 102 may communicate a request to validate software code to feature file validation tool 110 or any other suitable component of system 100. In some embodiments, device(s) 102 may be associated with an enterprise or a business unit within an enterprise.

This disclosure contemplates device(s) 102 being any appropriate device for sending and receiving communications over network 106. As an example and not by way of limitation, device(s) 102 may be a computer, a laptop, a wireless or cellular telephone, an electronic notebook, a personal digital assistant, a tablet, or any other device capable of receiving, processing, storing, and/or communicating information with other components of system 100. Device(s) 102 may also include a user interface, such as a display, a microphone, keypad, or other appropriate terminal equipment usable by user 104. In some embodiments, an application executed by device(s) 102 may perform the functions described herein. 

Network 106 facilitates communication between and amongst the various components of system 100. This disclosure contemplates network 106 being any suitable network operable to facilitate communication between the components of system 100. Network 106 may include any interconnecting system capable of transmitting audio, video, signals, data, messages, or any combination of the preceding. Network 106 may include all or a portion of a public switched telephone network (PSTN), a public or private data network, a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), a local, regional, or global communication or computer network, such as the internet, a wireline or wireless network, an enterprise intranet, or any other suitable communication link, including combinations thereof, operable to facilitate communication between the components.

In some embodiments, system 100 includes database 108. System 100 may include a single database 108 or any number of databases 108. Databases 108 may store software code that is in the software development process cycle. In some embodiments, database 108 stores test case files for testing features of the software code. This disclosure does not limit the databases 108 to storing only software code and test case files. This disclosure contemplates databases 108 storing any suitable data type. For example, databases 108 may store any type of data to be processed. 

Feature file validation tool 110 generates test case details files for testing the functionality of software code stored in database 108. As illustrated in Figure 1, feature file validation tool 110 includes a processor(s) 110 and a memory 112. This disclosure contemplates processor(s) 110 and memory 112 being configured to perform any of the operations of feature file validation tool 110 described herein. In particular embodiments, feature file validation tool 110 reduces the number of errors in computer software code.

Processor(s) 110 is any electronic circuitry, including, but not limited to microprocessors, application specific integrated circuits (ASIC), application specific instruction set processors (ASIP), and/or state machines, that communicatively couples to memory 112 and controls the operation of feature file validation tool 110. Processor(s) 110 may be 8-bit, 16-bit, 32-bit, 64-bit or of any other suitable architecture. Processor(s) 110 may include an arithmetic logic unit (ALU) for performing arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that fetches instructions from memory 112 and executes them by directing the coordinated operations of the ALU, registers and other components. Processor(s) 110 may include other hardware and software that operates to control and process information. Processor(s) 110 executes software stored on memory 112 to perform any of the functions described herein. Processor(s) 110 controls the operation and administration of feature file validation tool 110 by processing information received from network 106, device(s) 102, and memory 112. Processor(s) 110 may be a programmable logic device, a microcontroller, a microprocessor, any suitable processing device, or any suitable combination of the preceding. Processor(s) 110 is not limited to a single processing device and may encompass multiple processing devices.

Memory 112 may store, either permanently or temporarily, data, operational software, or other information for processor(s) 110. Memory 112 may include any one or a combination of volatile or non-volatile local or remote devices suitable for storing information. For example, memory 112 may include random access memory (RAM), read only memory (ROM), magnetic storage devices, optical storage devices, or any other suitable information storage device or a combination of these devices. The software represents any suitable set of instructions, logic, or code embodied in a computer-readable storage medium. For example, the software may be embodied in memory 112, a disk, a CD, or a flash drive. In particular embodiments, the software may include an application executable by processor(s) 110 to perform one or more of the functions described herein. In some embodiments, memory 112 stores a step definition file 114. Step definition file 114 generally includes a catalog of predetermined valid steps, as discussed in more detail in relation to feature file validation tool 110. This disclosure contemplates memory 112 storing any of the elements stored in databases 108 and/or by feature file validation tool 110. 

In an exemplary embodiment, feature file validation tool 110 receives a request 116. In some embodiments, request 116 is a request to configure a test case file(s) 124. As previously discussed, test case file(s) 124 may include one or more tests for determining whether a functionality of a computer software application is functioning properly. In some embodiments, a test includes a plurality of steps, as previously discussed. In some embodiments, test case file(s) 124 is an XML file written in gherkin language. Test case file(s) 124 may not be compatible with certain software tools. For example, test case file(s) 124 may not include fields required for one or more software tools. Request 116 may be a request to configure test case file(s) 124 to include information required to utilize one or more software tools with test case file(s) 124. In some embodiments, user 104 uses device(s) 102 to generate request 116. User 104 may utilize a graphical user interface to enter information to generate request 116. In some embodiments, the graphical user interface is included in a standalone tool that is system agnostic. In some embodiments, the standalone tool is designed in a generic way that may be used on a plurality of projects and a plurality of test case file(s) 124.

Feature file validation tool 110 may analyze request 116 to determine a test case file(s) 124, field name(s) 118, and field value(s) 120. In some embodiments, request 116 may include a description of test case file(s) 124. Feature file validation tool 110 may retrieve test case file(s) 124 from database 108. Field name(s) 118 generally facilitates identifying test case file(s) 124 and/or information within test case file(s) 124. For example, field name(s) 118 may include an application field name(s) 118, a project field name(s) 118, a type field name(s) 118, and/or test field name(s) 118. In some embodiments, each field name(s) 118 corresponds to a field value(s) 120. Field value(s) 120 generally provides information for test case file(s) 124. For example, field value(s) 120 may include the name of the application that test case file(s) 124 is testing (corresponding to application field name(s) 118), a project field value(s) 120 (corresponding to the project field name(s) 118), and any other suitable field value(s) 120 that corresponds to a field name(s) 118. In some embodiments, test case file(s) 124 may not include field name(s) 118 and/or field value(s) 120. In some embodiments, test case file(s) 124 must include field name(s) 118 and/or field value(s) 120 to be compatible with certain software tools.
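
As a hedged illustration of the analysis described above, the Python sketch below pulls field name(s) 118 and field value(s) 120 out of a request. The dictionary layout of request 116 is an assumption made for the sketch, since the disclosure does not specify the request format.

def extract_fields(request):
    """Return parallel lists of field names and field values from a request."""
    fields = request.get("fields", {})  # assumed layout of request 116
    return list(fields.keys()), list(fields.values())

request_116 = {
    "test_case_file": "calculator.feature",
    "fields": {"application": "CalculatorApp", "project": "ProjectAlpha"},
}
field_names_118, field_values_120 = extract_fields(request_116)
# field_names_118  -> ['application', 'project']
# field_values_120 -> ['CalculatorApp', 'ProjectAlpha']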

In some embodiments, feature file validation tool 110 generates first test case details file 122 using test case file(s) 124, field name(s) 118, and field value(s) 120. For example, feature file validation tool 110 may transform test case file(s) 124 to include field name(s) 118 and their corresponding field value(s) 120. Field name(s) 118 and field value(s) 120 may apply to test case file(s) 124 as a whole, to one or more tests within test case file(s) 124, and/or to one or more steps of the one or more tests within test case file(s) 124.
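
One possible way to apply field name(s) 118 and field value(s) 120 to a test case file is sketched below in Python; writing them as Gherkin-style tags at the top of the file is an assumption made for illustration, not the disclosed implementation.

def build_first_details_file(test_case_text, field_names, field_values):
    """Prepend field name/value pairs (as tags) to the test case text."""
    header = "\n".join(f"@{name}:{value}" for name, value in zip(field_names, field_values))
    return header + "\n" + test_case_text

feature_text = (
    "Feature: Calculator\n"
    "  Scenario: Multiply two numbers\n"
    "    Given the calculator is open"
)
first_details_122 = build_first_details_file(
    feature_text, ["application", "project"], ["CalculatorApp", "ProjectAlpha"]
)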

In some embodiments, feature file validation tool 110 performs validation 126 on first test case details file 122. Validation 126 may determine whether first test case details file 122 includes computer language errors. In some embodiments, first test case details file 122 includes gherkin language. Gherkin computer language, as with any computer language, requires that code be written in a particular format. Feature file validation tool 110 may determine whether first test case details file 122 complies with gherkin language formatting. For example, feature file validation tool 110 may apply computer language rules to first test case details file 122. Computer language rules may be rules that include requirements of the computer language. As another example, feature file validation tool 110 may determine whether first test case details file 122 includes errors such as typos. This disclosure contemplates first test case details file 122 being written in any suitable computer language and feature file validation tool 110 determining whether first test case details file 122 conforms to requirements for the computer language.
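
The sketch below illustrates what a format check of this kind could look like; it is a toy rule set written for this description, not a full Gherkin parser, and the keyword list is an assumption.

GHERKIN_KEYWORDS = ("Feature:", "Scenario:", "Given", "When", "Then", "And", "But", "@")

def find_language_errors(details_text):
    """Report lines that do not begin with an expected Gherkin keyword or tag."""
    errors = []
    for number, line in enumerate(details_text.splitlines(), start=1):
        stripped = line.strip()
        if stripped and not stripped.startswith(GHERKIN_KEYWORDS):
            errors.append((number, stripped))
    return errors

sample = "Feature: Calculator\nScenario: Multiply\nGiven the calculator is open\noops a typo line"
print(find_language_errors(sample))  # [(4, 'oops a typo line')]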

Feature file validation tool 110 may validate steps in first test case details file 122. As previously discussed, first test case details file 122 may include a plurality of tests to determine whether a software application is functioning properly given certain inputs and/or commands. Each test may include a plurality of steps. For example, a test may determine whether a calculator functionality for a software application is functioning properly. A step may include a command to launch the calculator functionality of the software application. Another step may provide inputs to the calculator software application. For example, the inputs may be 1 and 2. A step may include a command to multiply the inputs. A test may include an expected output. In this example, the expected output may be 2. In some embodiments, a test facilitates determining whether a software application is functioning properly by comparing an output of the software application to an expected output. In some embodiments, a software tool may accept only predefined steps. These predefined steps may be stored in step definition file 114. Step definition file 114 includes a catalog of predetermined valid steps, in some embodiments. Feature file validation tool 110 may compare steps in first test case details file 122 to predetermined steps to perform validation 126.
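
A hedged sketch of the step comparison follows; the one-valid-step-per-line layout of step definition file 114 is assumed purely for illustration.

def invalid_steps(test_steps, step_definition_text):
    """Return the steps that do not appear in the catalog of valid steps."""
    catalog = {line.strip() for line in step_definition_text.splitlines() if line.strip()}
    return [step for step in test_steps if step not in catalog]

step_definition_114 = (
    "Given the calculator is open\n"
    "When I multiply 1 and 2\n"
    "Then the result is 2"
)
steps = ["Given the calculator is open", "When I divide 1 by 2"]
print(invalid_steps(steps, step_definition_114))  # ['When I divide 1 by 2']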

Feature file validation tool 110 completes transformation 128 to transform first test case details file 122 to second test case details file 130, in some embodiments. As discussed, a software development and testing team may utilize a plurality of software tools to facilitate testing software application features in a design phase of a software application. For example, a software development and testing team may use a first software tool to generate and execute test case files. As another example, a software development and testing team may utilize a second software tool to manage the development lifecycle process. In this example, the second software tool may store a summary of errors identified by executing test case files. As another example, the second software tool may store information for development progress and/or any other software application development cycle management information. Two or more of the software tools may require test case file(s) 124 to be in a different format. First test case details file 122 may be compatible with a first software tool. For example, first test case details file 122 may be in an XML format. Second test case details file 130 may be compatible with a second software tool. For example, second test case details file 130 may be in an excel format.
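
The sketch below illustrates a transformation of this kind using only the Python standard library; the XML layout is assumed, and a CSV file stands in for the spreadsheet (excel) format that second test case details file 130 might use.

import csv
import xml.etree.ElementTree as ET

def transform_to_spreadsheet_rows(xml_text, output_path):
    """Flatten an assumed <test>/<step> XML layout into spreadsheet-style rows."""
    root = ET.fromstring(xml_text)
    with open(output_path, "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["test_name", "step"])
        for test in root.findall("test"):
            for step in test.findall("step"):
                writer.writerow([test.get("name"), step.text])

xml_122 = "<tests><test name='multiply'><step>Given the calculator is open</step></test></tests>"
transform_to_spreadsheet_rows(xml_122, "second_test_case_details_130.csv")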

In some embodiments, feature file validation tool 110 generates test identifications (“IDs”) 132 for the second test case details file 130. In some embodiments, a test ID(s) 132 may identify a test in second test case details file 130. In some embodiments, test ID(s) 132 may identify a step of a test in second test case details file 130. Test ID(s) 132 may include an alphabetical indicator, a numerical indicator, a special character indicator, and/or any other suitable type of indicator. Feature file validation tool 110 may revise second test case details file 130 to include test ID(s) 132. A software development and test team may utilize test ID(s) 132 to uniquely identify a test and/or a step within, e.g., second test case details file 130. As discussed in more detail below, test ID(s) 132 may facilitate linking first test case details file 122 and second test case details file 130.
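
One simple way to generate unique identifiers is sketched below; the "TC-" prefix and zero padding are assumptions, since the disclosure only requires that each test and/or step receive a unique indicator.

def assign_test_ids(test_names, prefix="TC"):
    """Give each test name a unique identifier such as TC-0001, TC-0002, ..."""
    return {name: f"{prefix}-{index:04d}" for index, name in enumerate(test_names, start=1)}

test_ids_132 = assign_test_ids(["launch calculator", "multiply inputs"])
# {'launch calculator': 'TC-0001', 'multiply inputs': 'TC-0002'}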

In some embodiments, feature file validation tool 110 completes synchronization 134 using first test case details file 122 and test ID(s) 132. A test ID(s) 132 is a unique identifier for a test and/or a step. As discussed, first test case details file 122 and second test case details file 130 may include the same or substantially the same tests and steps. Thus, if a test and/or a step in second test case details file 130 receives a test ID(s) 132, feature file validation tool 110 revises first test case details file 122 so that the corresponding test and/or step in first test case details file 122 receives the same or substantially the same test ID(s) 132. This allows tests and/or steps in first test case details file 122 and second test case details file 130 to be linked. This facilitates the software development cycle by allowing information for a test and/or step to be updated between test case details files.
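
The linking step can be sketched as follows, with both details files modeled as dictionaries keyed by test name; the data layout is an assumption made for illustration.

def synchronize_ids(first_details, second_details):
    """Copy each test ID from the second details file onto the matching test in the first."""
    for test_name, entry in second_details.items():
        if test_name in first_details:
            first_details[test_name]["test_id"] = entry["test_id"]
    return first_details

first_122 = {"multiply inputs": {"steps": 3}}
second_130 = {"multiply inputs": {"test_id": "TC-0002"}}
synchronize_ids(first_122, second_130)
# first_122 -> {'multiply inputs': {'steps': 3, 'test_id': 'TC-0002'}}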

Modifications, additions, or omissions may be made to system 100 without departing from the scope of the invention. For example, system 100 may include any number of processor(s) 110, memory 112, device(s) 102, and/or databases 108. While discussed as generating first test case details file 122 and second test case details file 130, this disclosure contemplates generating any suitable number of test case details files. Furthermore, the components of system 100 may be integrated or separated. For example, in particular implementations, memory 112 may be separated into multiple memories 112 to store the data described herein. 

Brief Description:

illustrates the feature file validation tool of the system of Figure 1;

Detailed Description:

Figure 2 illustrates the feature file validation tool 110 of the system 100 of Figure 1. As illustrated in Figure 2, feature file validation tool 110 includes a retrieval engine 202, a configuration engine 204, a validation engine 206, a transformation engine 208, and a synchronization engine 210. In particular embodiments, feature file validation tool 110 reduces errors in computer software by facilitating software application feature testing.

Retrieval engine 202 receives request 116 and test case file(s) 124, in some embodiments. In particular embodiments, retrieval engine 202 receives request 116 from one or more device(s) 102. For example, user 104 may generate request 116 using a graphical user interface of device(s) 102. In some embodiments, request 116 is a request to configure test case file(s) 124. As previously discussed, test case file(s) 124 may include a plurality of tests and each test may include a plurality of steps. In some embodiments, each test determines whether a functionality of a software application is operating properly. In some embodiments, retrieval engine 202 receives test case file(s) 124 from database 108. An example algorithm for retrieval engine 202 is as follows: wait for request 116; receive request 116 from one or more device(s) 102; in response to receiving request 116, retrieve test case file(s) 124 from database 108; receive test case file(s) 124; and communicate request 116 and test case file(s) 124 to configuration engine 204.

Configuration engine 204 determines field name(s) 118 and field value(s) 120 and generates first test case details file 122 using test case file(s) 124, field name(s) 118, and field value(s) 120, in some embodiments. Configuration engine 204 may extract field name(s) 118 and field value(s) 120 from request 116 in some embodiments. In some embodiments, field name(s) 118 include an application field name(s) 118 and a project field name(s) 118. In this example, field value(s) 120 may include a field value(s) 120 for the application field name that indicates the software application being tested. Field value(s) 120 may include a field value(s) 120 for the project field name that indicates a software development project. User 104 may enter field name(s) 118 and field value(s) 120 using a graphical user interface of device(s) 102. Configuration engine 204 may use field name(s) 118 and field value(s) 120 to generate first test case details file 122. In some embodiments, configuration engine 204 revises test case file(s) 124 to include field name(s) 118 and field value(s) 120 to generate first test case details file 122. Configuration engine 204 may communicate first test case details file 122 to validation engine 206, transformation engine 208, and/or synchronization engine 210. An example algorithm for configuration engine 204 generating first test case details file 122 is as follows: receive request 116 and test case file(s) 124; analyze request 116 to determine field name(s) 118 and field value(s) 120; generate first test case details file 122 using test case file(s) 124, field name(s) 118, and field value(s) 120; and communicate first test case details file 122 to validation engine 206.

Validation engine 206 performs validation 126, in some embodiments. As previously discussed, validation 126 may include a two-part validation. For example, validation 126 may determine whether the computer language of test case file(s) 124 conforms to the rules associated with the computer language. In this example, validation engine 206 may apply computer language rules to all or a part of first test case details file 122 to determine whether it conforms to the computer language. Computer language rules may include rules for the computer language requirements. As another example, validation engine 206 determines whether each step in the first test case details file 122 is valid by comparing the steps to one or more steps in step definition file 114. Step definition file 114 may include a catalog of predetermined valid steps. Validation engine 206 may retrieve step definition file 114 from database 108, memory 112, and/or any other suitable component of system 100. In some embodiments, database 108 communicates step definition file 114 to feature file validation tool 110, where it is stored in memory 112. In some embodiments, user 104 generates step definition file 114. If validation engine 206 determines that first test case details file 122 does not conform to computer language rules and/or that a step is not valid, validation engine 206 may generate a warning to user 104, in some embodiments. An example algorithm for validation engine 206 to perform validation 126 is as follows: receive first test case details file 122 from configuration engine 204; determine whether computer language in first test case details file 122 conforms to rules for the computer language; and determine whether each step in first test case details file 122 is valid. 

Transformation engine 208 performs transformation 128 to generate second test case details file 130, in some embodiments. Transformation engine 208 may transform first test case details file 122 from a first file format to a second file format to perform transformation 128. In some embodiments, second test case details file 130 includes the plurality of tests, the plurality of field name(s) 118, and the plurality of field value(s) 120 from first test case details file 122. In some embodiments, first test case details file 122 is in an XML file format and transformation engine 208 performs transformation 128 to generate second test case details file 130 in an excel file format. An example algorithm for transformation engine 208 to perform transformation 128 to generate second test case details file 130 is as follows: receive first test case details file 122; perform transformation 128 to transform first test case details file 122 into second test case details file 130; and communicate second test case details file 130 to synchronization engine 210.

Synchronization engine 210 generates test ID(s) 132 for each of the one or more tests and/or steps in second test case details file 130 and performs synchronization 134, in some embodiments. As previously discussed, test ID(s) 132 are unique identifiers for a test and/or step in second test case details file 130. Synchronization engine 210 may perform synchronization 134 in some embodiments. For example, synchronization engine 210 may revise first test case details file 122 to include test ID(s) 132. In some embodiments, first test case details file 122 and/or second test case details file 130 include the same or substantially the same tests and/or steps. If a test and/or a step in second test case details file 130 is linked to a test ID(s) 132, synchronization engine 210 revises first test case details file 122 such that the corresponding test and/or step in first test case details file 122 is linked to the same test ID(s) 132, in some embodiments. An example algorithm for synchronization engine 210 is as follows: receive second test case details file 130; generate test ID(s) 132 for tests and/or steps in second test case details file 130; and perform synchronization 134 to revise first test case details file 122 to include test ID(s) 132.

Modifications, additions, or omissions may be made to feature file validation tool 110 without departing from the scope of the invention. For example, while discussed as retrieval engine 202 receiving a single test case file(s) 124, retrieval engine 202 may receive any number of test case file(s) 124. Feature file validation tool 110 may generate a single first test case details file 122 from a plurality of test case file(s) 124. As another example, feature file validation tool 110 may generate a first test case details file 122 for each received test case file(s) 124. In some embodiments, user 104 may instruct feature file validation tool 110 to determine a single first test case details file 122 from a plurality of test case file(s) 124 using a graphical user interface of device(s) 102. As another example, validation engine 206 may perform validation 126 on test case file(s) 124, first test case details file 122, and/or second test case details file 130.

Brief Description:

is a flowchart illustrating a method for software development testing and management using the system of Figure 1.

Detailed Description:

Figure 3 is a flowchart illustrating an example method 300 for software development testing and management using the system of Figure 1. In particular embodiments, feature file validation tool 110 performs method 300. By performing method 300, feature file validation tool 110 reduces the number of errors in computer software application development.

In some embodiments, feature file validation tool 110 begins by receiving request 116 to configure test case file(s) 124 in step 302. In step 304, feature file validation tool 110 may retrieve test case file(s) 124 in response to request 116. Feature file validation tool 110 receives field name(s) 118 and field value(s) 120 in step 306, in some embodiments. For example, feature file validation tool 110 may extract field name(s) 118 and/or field value(s) 120 from request 116. Feature file validation tool 110 generates first test case details file 122 using test case file(s) 124, field name(s) 118, and field value(s) 120 in step 308, in some embodiments.

Feature file validation tool 110 may determine whether the steps in first test case details file 122 conform with step rules in step 310. For example, feature file validation tool 110 may compare each step in first test case details file 122 to a catalog of predetermined valid steps. If the steps do not conform to predetermined valid steps in step 310, feature file validation tool 110 may generate a warning in step 312 before returning to step 310. If the steps do conform to predetermined valid steps in step 310, the method proceeds to step 314 where feature file validation tool 110 determines whether the computer language in first test case details file 122 conforms to a computer language by applying computer language rules. If the computer language does not conform, the method proceeds to step 316 where feature file validation tool 110 generates a warning before returning to step 314, in some embodiments.

If the computer language does conform, the method proceeds to step 318 where feature file validation tool 110 generates second test case details file 130, in some embodiments. Feature file validation tool 110 may generate test ID(s) 132 at step 320 and perform synchronization 134 at step 322. For example, feature file validation tool 110 may transform first test case details file 122 to include test ID(s) 132 as previously discussed. 

Modifications, additions, or omissions may be made to method 300 depicted in Figure 3. Method 300 may include more, fewer, or other steps. For example, steps may be performed in parallel or in any suitable order. While discussed as feature file validation tool 110 performing the steps, any suitable component of system 100, such as device(s) 102 for example, may perform one or more steps of the method.

Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such as, for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate. 

Herein, “or” is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A or B” means “A, B, or both,” unless expressly indicated otherwise or indicated otherwise by context. Moreover, “and” is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A and B” means “A and B, jointly or severally,” unless expressly indicated otherwise or indicated otherwise by context.


Parts List

100

system

102

device(s)

104

user

106

network

108

database

110

processor(s)

112

memory

114

step definition file

116

request

118

field name(s)

120

field value(s)

122

first test case details file

124

test case file(s)

126

validation

128

transformation

130

second test case details file

132

test ID(s)

134

synchronization

200

feature file validation tool

202

retrieval engine

204

configuration engine

206

validation engine

208

transformation engine

210

synchronization engine

300

method

302

block

304

block

306

block

308

block

310

decision block

312

block

314

decision block

316

block

318

block

320

block

322

block


Terms/Definitions

feature file errors

second file

electronic notebook

omissions

multiple processing devices

state machines

second file format

floppy disk drives

substitutions

software development testing and management

warning message

other hardware and software

example embodiments

test

appended claims

fetches instructions

generic way

software computer program

microphone

electronic circuitry

configuration engine

processor registers

functionality

hard disk drives

received field names

yet another example

application field name

embodiments field names

network

summary

feature files

processor

test case file(s)

Figure 2

their corresponding field values

suitable data type

generate test IDs

limitation

descriptions

testing software application features

laptop

name

computer software application

preceding

suitable computer language and feature file validation tool

landing page

design stage

computer language errors

software tools

development progress

device(s)

software development process cycle

flowchart

gherkin computer language

computer network

application specific instruction set processor

validation

test case

wireline

communicatively couples

suitable combination

solid-state drives

other information

fields

numerals

code

time feature file validation

figures

predetermined valid steps

embodiments

elements

software development feature validation

synchronization

software application feature testing

gherkin language formatting

operative

performs validation

field-programmable gate arrays

signals

development lifecycle process

appropriate device

inputs

enterprise intranet

optical discs

file validation tool

user interface

excel file format

welcome screen

FPGAs

suitable network

hybrid hard drives

step

requirements

test file formats

first test case

apparatus or system

response

and performs synchronization

values

feature

global communication

second test case details file

ASICs

computer language

computer

microcontroller

excel format

management tool

testing team

test ID(s)

test field name

non-transitory computer-readable medium

software code testing and management

interconnecting system

certain functionality

combination or permutation

embodiments, test ID

accompanying drawings

database

transformation engine

existing tool

changes

test identification

portion

conjunction

PSTN

first software tool

generation engine

user

received field values

type field name

field

display

components

claim

communications

suitable order

invention

flash drive

retrieval engine

valid, validation engine

certain inputs and/or commands

calculator software application

drawings

memory requirements

Figure 3

gherkin language

username and password

validate software code

feature file

software tool

system

software developers

testing features

same tests and/or steps

indicator

following description

ALU and store

tool

software

project field value

output

testing

Extensible Markup Language

software development

RAM-drives

single database

method

drives

different format

cellular telephone

test steps

optical disc drives

XML file format

suitable computer language

other steps

only predefined steps

names

certain software tools

project field name

parts

error

validation engine

step definition file

definition file

suitable document format

different formats

magneto-optical discs

HHDs

part

number

completes synchronization

tests

data, operational software

conforms

test case file

software development cycle

performs

embodiment

existing tools

private data network

test and/or step

other suitable type

file conforms

computer software

one or more steps

suitable number

operations

application

computer software code

instructions

computer language rules

predetermine valid steps

complies

numerical indicator

operation and administration

other suitable architecture

various drawings

password

alphabetical indicator

context

internet

suitable set

erroneous execution

programmable logic device

control unit

example

synchronization engine

file

step rules

messages

particular implementations

second test case

scope

business unit

single test case file

predefined steps

design phase

computer software code and/or feature files

two-part validation

one or more processing units

predetermined steps

warning

more complete understanding

one or more software tools

test ID

computer software application development

special character indicator

person

method claim

identifying test case file

metropolitan area network

random access memory

Figure 1

relation

expected output

HDDs

variations

databases

operands

catalog

claims

single processing device

information

field value(s)

tests and/or steps

first computer language

one or more tests

disk

description

arithmetic logic unit

XML file

conform

ALU operations

unique identifiers

first file format

magnetic storage devices

type

received test case file

application specific integrated circuits (ASIC)

same tests and steps

feature file validation tool

combination(s)

other appropriate terminal equipment

computer-readable storage medium

request

case files

Extensible Markup Language format

typos

wide area network

disclosure

rules

particular format

processor(s)

remote devices

ordinary skill

suitable processing device

computer-readable non-transitory storage medium

reference

steps

graphical user interface

particular function

present disclosure

corresponding test and/or step

ASIP

first test case details file

field values

more detail

computer-readable non-transitory storage medium or media

arithmetic and logic operations

field names

command

multiple memories

wireless network

feature validation engine

enterprise

software development and test team

processing

keypad

calculator functionality

XML format

audio

floppy diskettes

stores

local area network

example algorithm

magnetic tapes

format

execution

field name(s)

standalone tool

standalone

other suitable field value

modifications

projects

application-specific ICs

various components

FDDs

software code

value

magneto-optical drives

microprocessors

files

functions

telephone network

personal digital assistant

software application

that particular function

other suitable communication link

software code development cycle

other suitable information storage device

username

second software tool

processed data

microprocessor

communication(s)

feature file’s

website landing page

determination

processor usage

data

software development testing

operation

computer processing

coordinated operations

apparatus

plurality

computer language requirements

test results

alterations

results

tablet

logic

suitable component

errors

optical storage devices

video

memory

transformation

software development project

registers

additions

component

several steps

determine

unique identifier

SSDs

Portable Multifunction Device


Drawings

Brief Description:

Figure 1 illustrates a portable multifunction device having a touch screen in accordance with some embodiments.

Detailed Description:

Figure 1 illustrates a portable multifunction device 100 having a touch screen 120 in accordance with some embodiments. The touch screen optionally displays one or more graphics within user interface 102 (UI). In this embodiment, as well as others described below, a user is enabled to select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers 104 (not drawn to scale in the figure) or one or more styluses 106 (not drawn to scale in the figure). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (from left to right, right to left, upward and/or downward) and/or a rolling of a finger (from right to left, left to right, upward and/or downward) that has made contact with portable multifunction device 100. In some implementations or circumstances, inadvertent contact with a graphic does not select the graphic. For example, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application when the gesture corresponding to selection is a tap. 
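
As an illustrative sketch only, the following Python fragment shows one way a tap (which selects the graphic under the contact) could be distinguished from a swipe (which does not); the movement threshold is an assumed value, not one taken from the disclosure.

TAP_MOVEMENT_THRESHOLD_PX = 10  # assumed maximum travel for a tap

def gesture_selects_graphic(touch_down, lift_off):
    """Treat the gesture as a selecting tap only if the contact barely moved."""
    dx = lift_off[0] - touch_down[0]
    dy = lift_off[1] - touch_down[1]
    return (dx * dx + dy * dy) ** 0.5 <= TAP_MOVEMENT_THRESHOLD_PX

print(gesture_selects_graphic((100, 200), (103, 202)))  # tap over an icon   -> True
print(gesture_selects_graphic((100, 200), (260, 205)))  # swipe over an icon -> False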

Device 100 optionally also includes one or more physical buttons, such as “home” or menu button 108. As described previously, menu button 108 is, optionally, used to navigate to any application 136 in a set of applications that are, optionally, executed on device 100. In some embodiments, the menu button 108 includes a fingerprint sensor that identifies a fingerprint on the menu button 108. The fingerprint sensor is optionally used to determine whether a finger on the menu button 108 has a fingerprint that matches a fingerprint used to unlock the device 100. Alternatively, in some embodiments, the menu button is implemented as a soft key in a GUI displayed on touch screen 120.
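
A minimal sketch of the fingerprint-based unlock decision follows; it reduces fingerprints to comparable string templates, which is an assumption made purely for illustration, whereas a real sensor performs probabilistic matching in hardware.

ENROLLED_UNLOCK_TEMPLATE = "template-owner-001"  # assumed enrolled fingerprint

def press_menu_button(scanned_template, device_locked):
    """Unlock the device only when the scanned fingerprint matches the enrolled one."""
    if device_locked:
        return "unlocked" if scanned_template == ENROLLED_UNLOCK_TEMPLATE else "still locked"
    return "navigate to home screen"

print(press_menu_button("template-owner-001", device_locked=True))  # unlocked
print(press_menu_button("template-stranger", device_locked=True))   # still locked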

In one embodiment, device 100 includes touch screen 120, menu button 108, push button 110 for powering the device on/off and locking the device, volume adjustment button(s) 112, Subscriber Identity Module (SIM) card slot 114, head set jack 116, and docking/charging external port 118. Push button 110 is, optionally, used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. In an alternative embodiment, device 100 also accepts verbal input for activation or deactivation of some functions through microphone 132. Device 100 also, optionally, includes one or more contact intensity sensors 128 for detecting intensity of contacts on touch screen 120 and/or one or more tactile output generators 130 for generating tactile outputs for a user of device 100. 


Parts List

100

portable multifunction device

102

user interface

104

one or more fingers

106

one or more styluses

108

menu button

110

push button

112

volume adjustment button(s)

114

Subscriber Identity Module (SIM) card slot

116

head set jack

118

external port

120

touch screen

122

speaker

124

optical sensor

126

proximity sensor

128

one or more contact intensity sensors

130

one or more tactile output generators

132

microphone

134

Accelerometer(s)


Terms/Definitions

one or more graphics

one or more contact intensity sensors

unlock process

power

contact

one or more physical buttons

rolling

embodiment

external port

device

touch screen

Subscriber Identity Module (SIM) card slot

user

soft key

one or more swipes

Subscriber Identity Module

corresponding application

fingerprint sensor

button

Accelerometer(s)

inadvertent contact

verbal input

detecting intensity

portable multifunction device

one or more styluses

right

one or more taps

fingerprint

head set jack

tactile outputs

others

volume adjustment button(s)

gesture

one embodiment

alternative embodiment

depressed state

applications

one or more tactile output generators

microphone

selection

figure

finger

implementations or circumstances

embodiments

swipe gesture

activation or deactivation

graphics

example

user interface

push button

application

screen

menu button

contacts

application icon

predefined time interval

one or more fingers

functions

Example of Computer System Architecture


Drawings

Brief Description:

depicts an illustrative computer system architecture that may be used in accordance with one or more illustrative aspects described herein. 

Detailed Description:

Figure 1 illustrates one example of a system architecture and data processing device that may be used to implement one or more illustrative aspects described herein in a standalone and/or networked environment. Various network nodes, such as data server 110, web server 106, computer 104, and laptop 102, may be interconnected via a wide area network 108 (WAN), such as the internet. Other networks may also or alternatively be used, including private intranets, corporate networks, LANs, metropolitan area networks (MANs), wireless networks, personal networks (PANs), and the like. Network 108 is for illustration purposes and may be replaced with fewer or additional computer networks. A local area network (LAN) may have one or more of any known LAN topology and may use one or more of a variety of different protocols, such as Ethernet. Devices such as data server 110, web server 106, computer 104, laptop 102, and other devices (not shown) may be connected to one or more of the networks via twisted pair wires, coaxial cable, fiber optics, radio waves, or other communication media.

The term “network” as used herein and depicted in the drawings refers not only to systems in which remote storage devices are coupled together via one or more communication paths, but also to stand-alone devices that may be coupled, from time to time, to such systems that have storage capability. Consequently, the term “network” includes not only a “physical network” but also a “content network,” which is comprised of the data, attributable to a single entity, which resides across all physical networks.

The components may include data server 110, web server 106, and client computer 104, laptop 102. Data server 110 provides overall access, control and administration of databases and control software for performing one or more illustrative aspects described herein. Data server 110 may be connected to web server 106 through which users interact with and obtain data as requested. Alternatively, data server 110 may act as a web server itself and be directly connected to the internet. Data server 110 may be connected to web server 106 through the network 108 (e.g., the internet), via direct or indirect connection, or via some other network. Users may interact with the data server 110 using remote computer 104, laptop 102, e.g., using a web browser to connect to the data server 110 via one or more externally exposed web sites hosted by web server 106. Client computer 104, laptop 102 may be used in concert with data server 110 to access data stored therein, or may be used for other purposes. For example, from client computer 104, a user may access web server 106 using an internet browser, as is known in the art, or by executing a software application that communicates with web server 106 and/or data server 110 over a computer network (such as the internet). 

Servers and applications may be combined on the same physical machines, and retain separate virtual or logical addresses, or may reside on separate physical machines. Figure 1 illustrates just one example of a network architecture that may be used, and those of skill in the art will appreciate that the specific network architecture and data processing devices used may vary, and are secondary to the functionality that they provide, as further described herein. For example, services provided by web server 106 and data server 110 may be combined on a single server.

Each component (data server 110, web server 106, computer 104, laptop 102) may be any type of known computer, server, or data processing device. Data server 110, e.g., may include a processor 112 controlling overall operation of the data server 110. Data server 110 may further include RAM 116, ROM 118, network interface 114, input/output interfaces 120 (e.g., keyboard, mouse, display, printer, etc.), and memory 122. Input/output interfaces 120 may include a variety of interface units and drives for reading, writing, displaying, and/or printing data or files. Memory 122 may further store operating system software 124 for controlling overall operation of the data server 110, control logic 126 for instructing data server 110 to perform aspects described herein, and other application software 128 providing secondary, support, and/or other functionality which may or may not be used in conjunction with aspects described herein. The control logic may also be referred to herein as the data server software (control logic 126). Functionality of the data server software may refer to operations or decisions made automatically based on rules coded into the control logic, made manually by a user providing input into the system, and/or a combination of automatic processing based on user input (e.g., queries, data updates, etc.). 

Memory 122 may also store data used in performance of one or more aspects described herein, including a first database 132 and a second database 130. In some embodiments, the first database may include the second database (e.g., as a separate table, report, etc.). That is, the information can be stored in a single database, or separated into different logical, virtual, or physical databases, depending on system design. Web server 106, computer 104, and laptop 102 may have a similar or different architecture to that described with respect to data server 110. Those of skill in the art will appreciate that the functionality of data server 110 (or web server 106, computer 104, laptop 102) as described herein may be spread across multiple data processing devices, for example, to distribute processing load across multiple computers, or to segregate transactions based on geographic location, user access level, quality of service (QoS), etc. 

One or more aspects may be embodied in computer-usable or readable data and/or computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices as described herein. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other device. The modules may be written in a source code programming language that is subsequently compiled for execution, or may be written in a scripting or markup language such as (but not limited to) HTML or XML. The computer-executable instructions may be stored on a computer-readable medium such as a nonvolatile storage device. Any suitable computer-readable storage media may be utilized, including hard disks, CD-ROMs, optical storage devices, magnetic storage devices, and/or any combination thereof. In addition, various transmission (non-storage) media representing data or events as described herein may be transferred between a source and a destination in the form of electromagnetic waves traveling through signal-conducting media such as metal wires, optical fibers, and/or wireless transmission media (e.g., air and/or space). Various aspects described herein may be embodied as a method, a data processing system, or a computer program product. Therefore, various functionalities may be embodied in whole or in part in software, firmware, and/or hardware or hardware equivalents such as integrated circuits, field programmable gate arrays (FPGAs), and the like. Particular data structures may be used to more effectively implement one or more aspects described herein, and such data structures are contemplated within the scope of computer-executable instructions and computer-usable data described herein. 


Parts List

102

laptop

104

computer

106

web server

108

network

110

data server

112

processor

114

network interface

116

RAM

118

ROM

120

input/output interfaces

122

memory

124

operating system software

126

control logic

128

other application software

130

second database

132

first database


Terms/Definitions

modules

databases and control software

nonvolatile storage device

method

servers and applications

server

data server software

user input

time

additional computer networks

second database

physical networks

devices

separate physical machines

hardware

software application

similar or different architecture

networks

various functionalities

overall operation

memory

system design

illustrative computer system architecture

client computers

data processing device

particular tasks

form

electromagnetic waves

radio waves

data processing system

scope

non-storage

routines

aspects

user

information

gate arrays

computer network

aka, remote desktop

firmware and/or hardware or hardware equivalents

users

scripting language

cloud-based environments

rules

source

various network nodes

computer

suitable computer

data server

web server

processor

geographic location

various transmission

remote-access

separate virtual or logical addresses

storage capability

reading

functionality

laptop

illustration purposes

combination

source code programming language

optical fibers

networked environment

one or more communication paths

computer-executable instructions

MANs

hard disks

overall access

input/output interfaces

magnetic storage devices

system architecture and data processing device

indirect connection

internet

instructing data server

internet browser

known LAN topology

specific network architecture

various aspects

ROM

same physical machines

other functionality

CD-ROMs, optical storage devices

conjunction

instructions

remote computers

single entity

such data structures

network

quality

program modules

physical databases

control logic

computer software

user access level

one or more program modules

LANs

first database

one or more illustrative aspects

one or more aspects

variety

mouse

execution

control and administration

single database

automatic processing

storage media

operations or decisions

particular data structures

device

logic

metropolitan area networks

other purposes

data processing devices

particular abstract data types

skill

client device

components

queries

FPGA

RAM

input

different system environments

secondary, support

writing

PANs

network architecture

network interface

other networks

keyboard

services

operating system software

systems

such systems

addition

programs

just one example

embodiments

processing load

personal networks

corporate networks

separate table

part

component

metal wires

wide area network (WAN)

Computing Architecture

twisted pair wires

example

fiber optics

access data

one or more externally exposed web sites

medium

interface units and drives

single server

printer

multiple computers

drawings

integrated circuits, field programmable gate arrays

ethernet

other communication media

local area network

web browser

HTML or XML

standalone

other application software

wireless transmission media

other device

computer program product

private intranets

performance

one or more computers

data

software

stand-alone devices

signal-conducting media

objects

destination

computer-usable data

data updates

system

air and/or space

multiple data processing devices

service

data structures

other devices

type

different protocols

remote storage devices

coaxial cable

concert

transactions

Performing Autonomous Path Navigation Using Deep Neural Networks


Drawings

Brief Description:

Figure 1 illustrates a flowchart of a method for performing autonomous path navigation using deep neural networks, in accordance with one embodiment;

Detailed Description:

Figure 1 illustrates a flowchart of a method 100 for performing autonomous path navigation using deep neural networks, in accordance with one embodiment. As shown in operation 102, image data is received at a deep neural network (DNN) 102. In one embodiment, the image data may include a pictorial image. In another embodiment, the image data may include a plurality of pictorial images. In yet another embodiment, the image data may be derived from video data (e.g., streaming video, etc.). 

Additionally, in one embodiment, the image data may include optical data, infrared data, light detection and ranging (LIDAR) data, radar data, depth data, sonar data, etc. In another embodiment, the image data may include stereo image data received from a plurality of imaging devices. In yet another embodiment, the image data may be received from one or more imaging devices. For example, the image data may be received from a digital imaging camera, a radar device, a LIDAR device, an infrared imaging device, a sonar imaging device, etc. 

Further, in one embodiment, the image data may be received in real-time from the one or more imaging devices. In another embodiment, the image data may be received from one or more hardware imaging devices, utilizing middleware (e.g., a robot operating system (ROS), etc.). In yet another embodiment, the image data may be received at a vehicle that includes the one or more imaging devices. For example, the vehicle may include any controlled mobile object, such as an automobile, airplane, amphibious vehicle (e.g., a boat, hydroplane, etc.), drone, micro aerial vehicle (MAV), rover, etc. In another example, the middleware may be running on hardware installed within the vehicle. In yet another example, the one or more cameras may be installed within the vehicle.

Further still, in one embodiment, the image data may indicate a current location of the vehicle on a path. For example, the one or more imaging devices may be mounted on the vehicle such that the image data created by the one or more imaging devices indicates the current location of the vehicle on the path. In another embodiment, the DNN may include a supervised classification network.

For example, the image data may include supervised data that is correctly labeled with an associated position. In another example, the DNN may be trained with image data having associated correct labels. In another embodiment, the DNN may implement a loss function. For example, the DNN may determine a label for the image data, compare the label to the associated correct label for the image data, and may compute a difference between the labels. In another example, the DNN may be adjusted, based on the difference between the labels, using back propagation.

In this way, the DNN may be more likely to correctly label image data during subsequent iterations.
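The labeling, comparison, and back-propagation steps just described can be sketched as a small supervised training loop. The tiny network, optimizer settings, and random placeholder data below are illustrative assumptions and are not the DNN architecture of Figure 1.

```python
# A minimal sketch of supervised training with a loss function and back propagation.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU(), nn.Linear(64, 3))
loss_fn = nn.CrossEntropyLoss()            # compares predicted and correct labels
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

images = torch.randn(8, 3, 32, 32)         # placeholder image data
labels = torch.randint(0, 3, (8,))         # placeholder correct labels

for _ in range(10):
    optimizer.zero_grad()
    logits = model(images)                 # the DNN determines a label
    loss = loss_fn(logits, labels)         # difference between the labels
    loss.backward()                        # back propagation
    optimizer.step()                       # adjust the DNN
```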

Also, as shown in operation 104, the DNN determines both an orientation of a vehicle with respect to a path and a lateral position of the vehicle with respect to the path, utilizing the image data 104. In one embodiment, the path may include any course that may be identified visually within the image data. In another embodiment, the path may include a trail, one or more tracks (e.g., train tracks, tire tracks, etc.), one or more power lines, a street, a culvert (e.g., a sewer culvert, etc.), an urban canyon, etc. 

In addition, in one embodiment, the vehicle may include the vehicle running middleware and the one or more imaging devices. In another embodiment, the vehicle may currently be in motion (e.g., along the path, etc.). In yet another embodiment, the orientation with respect to the path may include a plurality of probabilities. For example, the orientation with respect to the path may include a probability that a vehicle is currently facing left with respect to the path, a probability that a vehicle is currently facing right with respect to the path, and a probability that a vehicle is currently facing straight with respect to the path. In another example, each of the plurality of possibilities may be represented numerically, resulting in three numbers output by the DNN that indicate the orientation with respect to the path.

Furthermore, in one embodiment, the lateral position with respect to the path may include a plurality of probabilities. In another embodiment, the lateral position with respect to the path may identify a probability of a plurality of lateral offsets with respect to the path center. For example, the lateral position with respect to the path may include a probability that a vehicle is currently shifted left with respect to the path, a probability that a vehicle is currently shifted right with respect to the path, and a probability that a vehicle is centered with respect to the path. In another example, each of the plurality of possibilities may be represented numerically, resulting in three numbers output by the DNN that indicate the lateral position with respect to the path.

Further still, in one embodiment, the orientation and lateral position may be determined in real-time within the vehicle. In another embodiment, the image data may be sent from the vehicle to a remote location (e.g., a distributed computing environment), and the orientation and lateral position may be determined at the remote location.

In this way, both rotation and translation data may be determined by the DNN, utilizing the image data.
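One way to picture the two three-way outputs is to split six raw network outputs into an orientation distribution and a lateral-offset distribution, as in the sketch below; the logit values are made up for illustration.

```python
# Splitting six DNN outputs into two 3-way probability vectors (a sketch).
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

logits = np.array([0.2, -0.3, 1.5,    # facing left / facing right / facing straight
                   -1.0, 0.9, 0.1])   # shifted left / shifted right / centered

orientation = softmax(logits[:3])     # rotation with respect to the path
lateral = softmax(logits[3:])         # translation with respect to the path
print("orientation probabilities:", orientation)
print("lateral position probabilities:", lateral)
```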

Also, as shown in operation 106, a location of the vehicle is controlled, utilizing the orientation of the vehicle with respect to the path and the lateral position of the vehicle with respect to the path 106. In one embodiment, the orientation of the vehicle with respect to the path and the lateral position of the vehicle with respect to the path may be converted into steering directions. For example, the conversion may be performed by a controller module. In another example, the steering directions may include one or more steering signals (e.g., a steering angle, etc.). 

Additionally, in one embodiment, the steering directions may be converted into another protocol to create converted steering directions. For example, the steering directions may be converted from a middleware protocol (e.g., a ROS protocol, etc.) to a vehicle control protocol. In another example, the conversion may be performed by a communication module.

Furthermore, in one embodiment, the converted steering directions may be sent to a vehicle systems module to control one or more steering mechanisms of the vehicle. For example, the one or more steering mechanisms of the vehicle may control a direction in which the vehicle is moving. In another example, the converted steering directions may be sent to the vehicle systems module utilizing a communication protocol.

Further still, in one embodiment, the one or more steering mechanisms may be adjusted, based on the converted steering directions. For example, the one or more steering mechanisms may be adjusted by the vehicle systems module to move the vehicle laterally to the center of the path. In another example, the one or more steering mechanisms may be adjusted to move the vehicle so that the orientation of the vehicle is straight with respect to the path. In another embodiment, the vehicle location controlling may be performed in real-time within the vehicle.
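A controller module of the kind described above might map the two probability vectors to a steering signal roughly as sketched below. The gains, sign convention, and linear form are assumptions for illustration, not a conversion specified by this document.

```python
# Hypothetical conversion of DNN probabilities into a steering angle.
def steering_angle(orientation, lateral, k_rot=0.8, k_lat=0.4):
    p_left, p_right, p_straight = orientation   # facing left / right / straight
    q_left, q_right, q_center = lateral         # shifted left / right / centered
    # Steer right (positive, by assumption) when facing or shifted left, and vice versa.
    return k_rot * (p_left - p_right) + k_lat * (q_left - q_right)

angle = steering_angle([0.7, 0.1, 0.2], [0.2, 0.2, 0.6])
print(f"steering angle (sign convention assumed): {angle:+.3f}")
```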

Also, in one embodiment, the DNN may control directional stability by utilizing predictions with a reduced confidence. For example, the DNN may implement a classification scheme with a reduced confidence for smoother and more stable vehicle direction control. 
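One common way to obtain such reduced-confidence predictions is to smooth the training labels so the network never learns to output hard 0/1 probabilities; the smoothing factor below is an assumed value, not one taken from this document.

```python
# Label smoothing as one possible "reduced confidence" scheme (a sketch).
import numpy as np

def smooth_labels(one_hot, epsilon=0.1):
    """Blend hard labels toward a uniform distribution so predictions stay
    less extreme, which can make direction control smoother."""
    num_classes = one_hot.shape[-1]
    return one_hot * (1.0 - epsilon) + epsilon / num_classes

print(smooth_labels(np.array([0.0, 1.0, 0.0])))   # ~[0.033, 0.933, 0.033]
```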

In addition, in one embodiment, a second DNN may perform object detection within the path. For example, the second DNN may be included within the vehicle and may communicate with other modules within the vehicle (e.g., via middleware, etc.). In another example, the second DNN may receive the image data, and may output object data indicating whether an object (e.g., a person, animal, etc.) is in the image data. In yet another example, the object data may be sent from the second DNN to the controller module (e.g., via middleware, etc.). In still another example, the controller module may determine whether a size of the object in the image data is a predetermined percentage of a size of the image in the image data.

Further, in one example, the controller module may send one or more commands to the vehicle systems module (e.g., to change a course of the vehicle, to stop a functioning of the vehicle, etc.) in response to determining that the size of the object in the image data is equal to or greater than the predetermined percentage of the size of the image in the image data. This may provide a safety mechanism that may stop the vehicle when an object of a predetermined size is on the path.
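The size check described above can be sketched as a simple area-ratio test; the bounding-box format and the 40% threshold are assumptions used only for illustration.

```python
# Hypothetical safety check: stop when a detected object fills enough of the image.
def should_stop(box, image_width, image_height, threshold=0.4):
    x0, y0, x1, y1 = box                         # detected object's corners (pixels)
    object_area = max(0, x1 - x0) * max(0, y1 - y0)
    return object_area / (image_width * image_height) >= threshold

# A box covering roughly 46% of a 640x480 frame would trigger a stop command.
print(should_stop((100, 40, 540, 360), 640, 480))   # True
```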

Further still, in one embodiment, a third DNN may perform obstacle detection associated with the path. For example, the third DNN may be included within the vehicle and may communicate with other modules within the vehicle (e.g., via middleware, etc.). In another example, the third DNN may receive the image data, and may output obstacle data such as a set of weights indicating a likelihood of one or more obstacles at various locations and distances along the path. For instance, the third DNN may implement simultaneous localization and mapping (SLAM) to identify a location of the vehicle within a scene indicated by the image data and provide information about a relative location of static objects within the scene.

Also, in one example, the obstacle data may be sent from the third DNN to the controller module (e.g., via middleware, etc.). In another example, the controller module may adjust the location of the vehicle, utilizing the obstacle data. This may help the vehicle avoid static obstacles in and alongside the path.

In this way, a vehicle may be autonomously controlled, utilizing steering directions derived from DNN analysis of image data. Additionally, a DNN may perform an estimation of both vehicle orientation (3 classes) and lateral position with respect to the path (3 more classes), for a total of 6 classes. Further, a loss function may be implemented during specific DNN training. Further still, a reduced confidence implementation may be performed within a DNN. Also, object and obstacle detection may be performed during autonomous navigation via additional DNNs. In addition, these features may be implemented utilizing on-board, real-time processing.

Brief Description:

Figure 2 illustrates a parallel processing unit, in accordance with one embodiment.

Detailed Description:

Figure 2 illustrates a parallel processing unit (PPU) 202, in accordance with one embodiment. In one embodiment, the PPU 202 is a multi-threaded processor that is implemented on one or more integrated circuit devices. The PPU 202 is a latency hiding architecture designed to process many threads in parallel. A thread (i.e., a thread of execution) is an instantiation of a set of instructions configured to be executed by the PPU 202. In one embodiment, the PPU 202 is a graphics processing unit (GPU) configured to implement a graphics rendering pipeline for processing three-dimensional (3D) graphics data in order to generate two-dimensional (2D) image data for display on a display device such as a liquid crystal display (LCD) device. In other embodiments, the PPU 202 may be utilized for performing general-purpose computations. While one exemplary parallel processor is provided herein for illustrative purposes, it should be strongly noted that such processor is set forth for illustrative purposes only, and that any processor may be employed to supplement and/or substitute for the same.

 

One or more PPUs 202 may be configured to accelerate thousands of High Performance Computing (HPC), data center, and machine learning applications. The PPU 202 may be configured to accelerate numerous deep learning systems and applications, including autonomous vehicle platforms, deep learning, high-accuracy speech, image, and text recognition systems, intelligent video analytics, molecular simulations, drug discovery, disease diagnosis, weather forecasting, big data analytics, astronomy, molecular dynamics simulation, financial modeling, robotics, factory automation, real-time language translation, online search optimizations, personalized user recommendations, and the like. 

 

As shown in Figure 2, the PPU 202 includes an input/output (I/O) unit 208, a front end unit 212, a scheduler unit 214, a work distribution unit 216, a hub 210, a crossbar (XBar) 218, one or more general processing clusters (GPCs) 204, and one or more memory partition units 220. The PPU 202 may be connected to a host processor or other PPUs 202 via one or more high-speed NVLink 206 interconnects. The PPU 202 may be connected to a host processor or other peripheral devices via an interconnect 224. The PPU 202 may also be connected to a local memory comprising a number of memory devices 222. In one embodiment, the local memory may comprise a number of dynamic random access memory (DRAM) devices. The DRAM devices may be configured as a high-bandwidth memory (HBM) subsystem, with multiple DRAM dies stacked within each device.

 

The NVLink 206 interconnect enables systems to scale and include one or more PPUs 202 combined with one or more CPUs, supports cache coherence between the PPUs 202 and the CPUs, and supports CPU mastering. Data and/or commands may be transmitted by the NVLink 206 through the hub 210 to/from other units of the PPU 202 such as one or more copy engines, a video encoder, a video decoder, a power management unit, etc. (not explicitly shown). The NVLink 206 is described in more detail in conjunction with Figure 6.

 

The I/O unit 208 is configured to transmit and receive communications (i.e., commands, data, etc.) from a host processor (not shown) over the interconnect 224. The I/O unit 208 may communicate with the host processor directly via the interconnect 224 or through one or more intermediate devices such as a memory bridge. In one embodiment, the I/O unit 208 may communicate with one or more other processors, such as one or more other PPUs 202, via the interconnect 224. In one embodiment, the I/O unit 208 implements a Peripheral Component Interconnect Express (PCIe) interface for communications over a PCIe bus, and the interconnect 224 is a PCIe bus. In alternative embodiments, the I/O unit 208 may implement other types of well-known interfaces for communicating with external devices.

 

The I/O unit 208 decodes packets received via the interconnect 224. In one embodiment, the packets represent commands configured to cause the PPU 202 to perform various operations. The I/O unit 208 transmits the decoded commands to various other units of the PPU 202 as the commands may specify. For example, some commands may be transmitted to the front end unit 212. Other commands may be transmitted to the hub 210 or other units of the PPU 202 such as one or more copy engines, a video encoder, a video decoder, a power management unit, etc. (not explicitly shown). In other words, the I/O unit 208 is configured to route communications between and among the various logical units of the PPU 202.

 

In one embodiment, a program executed by the host processor encodes a command stream in a buffer that provides workloads to the PPU 202 for processing. A workload may comprise several instructions and data to be processed by those instructions. The buffer is a region in a memory that is accessible (i.e., read/write) by both the host processor and the PPU 202. For example, the I/O unit 208 may be configured to access the buffer in a system memory connected to the interconnect 224 via memory requests transmitted over the interconnect 224. In one embodiment, the host processor writes the command stream to the buffer and then transmits a pointer to the start of the command stream to the PPU 202. The front end unit 212 receives pointers to one or more command streams. The front end unit 212 manages the one or more streams, reading commands from the streams and forwarding commands to the various units of the PPU 202.
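The buffer-and-pointer handoff described above can be modeled in a few lines: the host writes commands into a shared region and passes only a pointer (here, an index) to the device, whose front end then walks the stream. This is a conceptual sketch, not the PPU 202's actual command format.

```python
# Toy model of a command stream shared between a host and a device front end.
class CommandStream:
    def __init__(self):
        self.buffer = []               # region readable and writable by both sides

    def host_write(self, commands):
        pointer = len(self.buffer)     # "pointer" to the start of the new stream
        self.buffer.extend(commands)
        return pointer

    def front_end_read(self, pointer):
        for command in self.buffer[pointer:]:   # front end forwards each command
            yield command

stream = CommandStream()
ptr = stream.host_write(["LAUNCH", "COPY", "SYNC"])
for cmd in stream.front_end_read(ptr):
    print("forwarding:", cmd)
```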

 

The front end unit 212 is coupled to a scheduler unit 214 that configures the various GPCs 204 to process tasks defined by the one or more streams. The scheduler unit 214 is configured to track state information related to the various tasks managed by the scheduler unit 214. The state may indicate which GPC 204 a task is assigned to, whether the task is active or inactive, a priority level associated with the task, and so forth. The scheduler unit 214 manages the execution of a plurality of tasks on the one or more GPCs 204.

 

The scheduler unit 214 is coupled to a work distribution unit 216 that is configured to dispatch tasks for execution on the GPCs 204. The work distribution unit 216 may track a number of scheduled tasks received from the scheduler unit 214. In one embodiment, the work distribution unit 216 manages a pending task pool and an active task pool for each of the GPCs 204. The pending task pool may comprise a number of slots (e.g., 32 slots) that contain tasks assigned to be processed by a particular GPC 204. The active task pool may comprise a number of slots (e.g., 4 slots) for tasks that are actively being processed by the GPC 204. As a GPC 204 finishes the execution of a task, that task is evicted from the active task pool for that GPC 204, and one of the other tasks from the pending task pool is selected and scheduled for execution on the GPC 204. If an active task has been idle on the GPC 204, such as while waiting for a data dependency to be resolved, then the active task may be evicted from the GPC 204 and returned to the pending task pool while another task in the pending task pool is selected and scheduled for execution on the GPC 204.
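The pending/active pools can be pictured with a toy scheduler. The 32 pending slots and 4 active slots come from the example numbers above; the eviction and selection policy is deliberately simplified and should not be read as the hardware's actual algorithm.

```python
# Toy pending/active task pool per GPC (policy details are assumptions).
from collections import deque

PENDING_SLOTS, ACTIVE_SLOTS = 32, 4

class GPCScheduler:
    def __init__(self):
        self.pending = deque(maxlen=PENDING_SLOTS)
        self.active = []

    def submit(self, task):
        self.pending.append(task)

    def schedule(self):
        while self.pending and len(self.active) < ACTIVE_SLOTS:
            self.active.append(self.pending.popleft())

    def finish(self, task):
        self.active.remove(task)       # evict the completed task
        self.schedule()                # pull the next pending task

    def idle(self, task):
        self.active.remove(task)       # evict a task waiting on a dependency
        self.pending.append(task)      # return it to the pending pool
        self.schedule()

gpc = GPCScheduler()
for i in range(6):
    gpc.submit(f"task{i}")
gpc.schedule()
gpc.idle("task1")
print(gpc.active)                      # ['task0', 'task2', 'task3', 'task4']
```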

 

The work distribution unit 216 communicates with the one or more GPCs 204 via the XBar 218. The XBar 218 is an interconnect network that couples many of the units of the PPU 202 to other units of the PPU 202. For example, the XBar 218 may be configured to couple the work distribution unit 216 to a particular GPC 204. Although not shown explicitly, one or more other units of the PPU 202 may also be connected to the XBar 218 via the hub 210.

 

The tasks are managed by the scheduler unit 214 and dispatched to a GPC 204 by the work distribution unit 216. The GPC 204 is configured to process the task and generate results. The results may be consumed by other tasks within the GPC 204, routed to a different GPC 204 via the XBar 218, or stored in the memory devices 222. The results can be written to the memory devices 222 via the memory partition units 220, which implement a memory interface for reading and writing data to/from the memory devices 222. The results can be transmitted to another PPU 202 or CPU via the NVLink 206. In one embodiment, the PPU 202 includes a number U of memory partition units 220 that is equal to the number of separate and distinct memory devices 222 coupled to the PPU 202. A memory partition unit 220 will be described in more detail below in conjunction with Figure 4.

 

In one embodiment, a host processor executes a driver kernel that implements an application programming interface (API) that enables one or more applications executing on the host processor to schedule operations for execution on the PPU 202. In one embodiment, multiple compute applications are simultaneously executed by the PPU 202 and the PPU 202 provides isolation, quality of service (QoS), and independent address spaces for the multiple compute applications. An application may generate instructions (i.e., API calls) that cause the driver kernel to generate one or more tasks for execution by the PPU 202. The driver kernel outputs tasks to one or more streams being processed by the PPU 202. Each task may comprise one or more groups of related threads, referred to herein as a warp. In one embodiment, a warp comprises 32 related threads that may be executed in parallel. Cooperating threads may refer to a plurality of threads including instructions to perform the task and that may exchange data through shared memory. Threads and cooperating threads are described in more detail in conjunction with Figure 5.

Brief Description:

Figure 3 illustrates a general processing cluster within the parallel processing unit of Figure 2, in accordance with one embodiment.

Detailed Description:

Figure 3 illustrates a GPC 204 of the PPU 202 of Figure 2, in accordance with one embodiment. As shown in Figure 3, each GPC 204 includes a number of hardware units for processing tasks. In one embodiment, each GPC 204 includes a pipeline manager 302, a pre-raster operations (PROP) unit 310, a raster engine 312, a work distribution crossbar (WDX) 318, a memory management unit (MMU) 314, and one or more data processing clusters (DPCs) 318. It will be appreciated that the GPC 204 of Figure 3 may include other hardware units in lieu of or in addition to the units shown in Figure 3.

 

In one embodiment, the operation of the GPC 204 is controlled by the pipeline manager 302. The pipeline manager 302 manages the configuration of the one or more DPCs 318 for processing tasks allocated to the GPC 204. In one embodiment, the pipeline manager 302 may configure at least one of the one or more DPCs 318 to implement at least a portion of a graphics rendering pipeline. For example, a DPC 318 may be configured to execute a vertex shader program on the programmable streaming multiprocessor (SM) 306. The pipeline manager 302 may also be configured to route packets received from the work distribution unit 216 to the appropriate logical units within the GPC 204. For example, some packets may be routed to fixed function hardware units in the PROP unit 310 and/or raster engine 312 while other packets may be routed to the DPCs 318 for processing by the primitive engine 304 or the SM 306. In one embodiment, the pipeline manager 302 may configure at least one of the one or more DPCs 318 to implement a neural network model and/or a computing pipeline.

 

The PROP unit 310 is configured to route data generated by the raster engine 312 and the DPCs 318 to a Raster Operations (ROP) unit in the memory partition unit 220, described in more detail in conjunction with Figure 4. The PROP unit 310 may also be configured to perform optimizations for color blending, organize pixel data, perform address translations, and the like. 

 

The raster engine 312 includes a number of fixed function hardware units configured to perform various raster operations. In one embodiment, the raster engine 312 includes a setup engine, a coarse raster engine, a culling engine, a clipping engine, a fine raster engine, and a tile coalescing engine. The setup engine receives transformed vertices and generates plane equations associated with the geometric primitive defined by the vertices. The plane equations are transmitted to the coarse raster engine to generate coverage information (e.g., an x,y coverage mask for a tile) for the primitive. The output of the coarse raster engine is transmitted to the culling engine, where fragments associated with the primitive that fail a z-test are culled, and transmitted to a clipping engine, where fragments lying outside a viewing frustum are clipped. Those fragments that survive clipping and culling may be passed to the fine raster engine to generate attributes for the pixel fragments based on the plane equations generated by the setup engine. The output of the raster engine 312 comprises fragments to be processed, for example, by a fragment shader implemented within a DPC 318.

 

Each DPC 318 included in the GPC 204 includes an M-Pipe Controller (MPC) 308, a primitive engine 304, and one or more SMs 306. The MPC 308 controls the operation of the DPC 318, routing packets received from the pipeline manager 302 to the appropriate units in the DPC 318. For example, packets associated with a vertex may be routed to the primitive engine 304, which is configured to fetch vertex attributes associated with the vertex from the memory devices 222. In contrast, packets associated with a shader program may be transmitted to the SM 306.

 

The SM 306 comprises a programmable streaming processor that is configured to process tasks represented by a number of threads. Each SM 306 is multi-threaded and configured to execute a plurality of threads (e.g., 32 threads) from a particular group of threads concurrently. In one embodiment, the SM 306 implements a SIMD (Single-Instruction, Multiple-Data) architecture where each thread in a group of threads (i.e., a warp) is configured to process a different set of data based on the same set of instructions. All threads in the group of threads execute the same instructions. In another embodiment, the SM 306 implements a SIMT (Single-Instruction, Multiple Thread) architecture where each thread in a group of threads is configured to process a different set of data based on the same set of instructions, but where individual threads in the group of threads are allowed to diverge during execution. In one embodiment, a program counter, call stack, and execution state are maintained for each warp, enabling concurrency between warps and serial execution within warps when threads within the warp diverge. In another embodiment, a program counter, call stack, and execution state are maintained for each individual thread, enabling equal concurrency between all threads, within and between warps. When execution state is maintained for each individual thread, threads executing the same instructions may be converged and executed in parallel for maximum efficiency. The SM 306 will be described in more detail below in conjunction with Figure 5.
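The divergence behavior described for the SIMT model can be illustrated with a small simulation: one instruction stream, two serialized branch paths under an active mask, then reconvergence. This is a conceptual model only, not a description of the SM 306's actual hardware scheduling.

```python
# Simulating warp divergence: the two branch paths execute one after the other.
def run_warp(values):
    warp = range(len(values))                       # thread ids in one warp
    taken = [t for t in warp if values[t] % 2 == 0] # threads taking the "if" path
    not_taken = [t for t in warp if t not in taken] # threads taking the "else" path

    results = [None] * len(values)
    for t in taken:                                 # pass 1, active mask = taken
        results[t] = values[t] // 2
    for t in not_taken:                             # pass 2, active mask = not_taken
        results[t] = values[t] * 3 + 1
    return results                                  # threads reconverge here

print(run_warp([4, 7, 10, 3]))                      # [2, 22, 5, 10]
```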

 

The MMU 314 provides an interface between the GPC 204 and the memory partition unit 220. The MMU 314 may provide translation of virtual addresses into physical addresses, memory protection, and arbitration of memory requests. In one embodiment, the MMU 314 provides one or more translation lookaside buffers (TLBs) for performing translation of virtual addresses into physical addresses in the memory devices 222.

Brief Description:

Figure 4 illustrates a memory partition unit of the parallel processing unit of Figure 2, in accordance with one embodiment.

Detailed Description:

Figure 4 illustrates a memory partition unit 220 of the PPU 202 of Figure 2, in accordance with one embodiment. As shown in Figure 4, the memory partition unit 220 includes a Raster Operations (ROP) unit 402, a level two (L2) cache 404, and a memory interface 406. The memory interface 406 is coupled to the memory 220. The memory interface 406 may implement 32, 64, 128, 1024-bit data buses, or the like, for high-speed data transfer. In one embodiment, the PPU 202 incorporates U memory interfaces 406, one memory interface 406 per pair of memory partition units 220, where each pair of memory partition units 220 is connected to a corresponding memory device 222. For example, the PPU 202 may be connected to up to Y memory devices 222, such as high bandwidth memory stacks or graphics double-data-rate, version 5, synchronous dynamic random access memory (GDDR5 SDRAM). 

 

In one embodiment, the memory interface 406 implements an HBM2 memory interface and Y equals half U. In one embodiment, the HBM2 memory stacks are located on the same physical package as the PPU 202, providing substantial power and area savings compared with conventional GDDR5 SDRAM systems. In one embodiment, each HBM2 stack includes four memory dies and Y equals 4, with each HBM2 stack including two 128-bit channels per die for a total of 8 channels and a data bus width of 1024 bits.

 

In one embodiment, the memory partition unit 220 supports Single-Error Correcting Double-Error Detecting (SECDED) Error Correction Code (ECC) to protect data. ECC provides higher reliability for compute applications that are sensitive to data corruption. Reliability is especially important in large-scale cluster computing environments where PPUs 202 process very large datasets and/or run applications for extended periods.

 

In one embodiment, the PPU 202 implements a multi-level memory hierarchy. In one embodiment, the memory partition unit 220 supports a unified memory to provide a single unified virtual address space for CPU and PPU 202 memory, enabling data sharing between virtual memory systems. In one embodiment, the frequency of accesses by a PPU 202 to memory located on other processors is traced to ensure that memory pages are moved to the physical memory of the PPU 202 that is accessing the pages more frequently. In one embodiment, the NVLink 206 supports address translation services, allowing the PPU 202 to directly access a CPU's page tables and providing full access to CPU memory by the PPU 202.

 

In one embodiment, copy engines transfer data between multiple PPUs 202 or between PPUs 202 and CPUs. The copy engines can generate page faults for addresses that are not mapped into the page tables. The memory partition unit 220 can then service the page faults, mapping the addresses into the page table, after which the copy engine can perform the transfer. In a conventional system, memory is pinned (i.e., non-pageable) for multiple copy engine operations between multiple processors, substantially reducing the available memory. With hardware page faulting, addresses can be passed to the copy engines without worrying whether the memory pages are resident, and the copy process is transparent. 

 

Data from the memory devices 222 or other system memory may be fetched by the memory partition unit 220 and stored in the L2 cache 404, which is located on-chip and is shared between the various GPCs 204. As shown, each memory partition unit 220 includes a portion of the L2 cache 404 associated with a corresponding memory device 222. Lower level caches may then be implemented in various units within the GPCs 204. For example, each of the SMs 306 may implement a level one (L1) cache. The L1 cache is private memory that is dedicated to a particular SM 306. Data from the L2 cache 404 may be fetched and stored in each of the L1 caches for processing in the functional units of the SMs 306. The L2 cache 404 is coupled to the memory interface 406 and the XBar 218.

 

The ROP unit 402 performs graphics raster operations related to pixel color, such as color compression, pixel blending, and the like. The ROP unit 402 also implements depth testing in conjunction with the raster engine 312, receiving a depth for a sample location associated with a pixel fragment from the culling engine of the raster engine 312. The depth is tested against a corresponding depth in a depth buffer for a sample location associated with the fragment. If the fragment passes the depth test for the sample location, then the ROP unit 402 updates the depth buffer and transmits a result of the depth test to the raster engine 312. It will be appreciated that the number of memory partition units 220 may be different than the number of GPCs 204 and, therefore, each ROP unit 402 may be coupled to each of the GPCs 204. The ROP unit 402 tracks packets received from the different GPCs 204 and determines to which GPC 204 a result generated by the ROP unit 402 is routed through the XBar 218.
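The per-sample depth test performed by the ROP unit 402 can be sketched as a compare-and-update against a depth buffer. The "less than" comparison and the buffer layout below are common conventions assumed only for illustration.

```python
# A sketch of a depth test: keep the fragment only if it is closer than what is stored.
def depth_test(depth_buffer, x, y, fragment_depth):
    if fragment_depth < depth_buffer[y][x]:
        depth_buffer[y][x] = fragment_depth   # fragment passes: update the buffer
        return True                           # result reported back to the raster engine
    return False                              # fragment is occluded: discard it

buffer = [[1.0, 1.0], [1.0, 1.0]]             # depth buffer cleared to "far"
print(depth_test(buffer, 0, 1, 0.25))         # True; buffer[1][0] becomes 0.25
print(depth_test(buffer, 0, 1, 0.80))         # False; 0.80 lies behind 0.25
```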

Brief Description:

Figure 5 illustrates the streaming multi-processor of Figure 3, in accordance with one embodiment.

Detailed Description:

Figure 5 illustrates the streaming multi-processor 306 of Figure 3, in accordance with one embodiment. As shown in Figure 5, the SM 306 includes an instruction cache 502, one or more scheduler units 214, a register file 506, one or more processing cores 508, one or more special function units (SFUs) 604, one or more load/store units (LSUs) 512, an interconnect network 514, and a shared memory/L1 cache 516.

 

As described above, the work distribution unit 216 dispatches tasks for execution on the GPCs 204 of the PPU 202. The tasks are allocated to a particular DPC 318 within a GPC 204 and, if the task is associated with a shader program, the task may be allocated to an SM 306. The scheduler unit 214 receives the tasks from the work distribution unit 216 and manages instruction scheduling for one or more thread blocks assigned to the SM 306. The scheduler unit 214 schedules thread blocks for execution as warps of parallel threads, where each thread block is allocated at least one warp. In one embodiment, each warp executes 32 threads. The scheduler unit 214 may manage a plurality of different thread blocks, allocating the warps to the different thread blocks and then dispatching instructions from the plurality of different cooperative groups to the various functional units (i.e., processing cores 508, SFUs 604, and LSUs 512) during each clock cycle.

 

Cooperative Groups is a programming model for organizing groups of communicating threads that allows developers to express the granularity at which threads are communicating, enabling the expression of richer, more efficient parallel decompositions. Cooperative launch APIs support synchronization amongst thread blocks for the execution of parallel algorithms. Conventional programming models provide a single, simple construct for synchronizing cooperating threads: a barrier across all threads of a thread block (i.e., the syncthreads( ) function). However, programmers would often like to define groups of threads at smaller than thread block granularities and synchronize within the defined groups to enable greater performance, design flexibility, and software reuse in the form of collective group-wide function interfaces.

 

Cooperative Groups enables programmers to define groups of threads explicitly at sub-block (i.e., as small as a single thread) and multi-block granularities, and to perform collective operations such as synchronization on the threads in a cooperative group. The programming model supports clean composition across software boundaries, so that libraries and utility functions can synchronize safely within their local context without having to make assumptions about convergence. Cooperative Groups primitives enable new patterns of cooperative parallelism, including producer-consumer parallelism, opportunistic parallelism, and global synchronization across an entire grid of thread blocks.

 

A dispatch unit 504 is configured to transmit instructions to one or more of the functional units. In one embodiment, the scheduler unit 214 includes two dispatch units 504 that enable two different instructions from the same warp to be dispatched during each clock cycle. In alternative embodiments, each scheduler unit 214 may include a single dispatch unit 504 or additional dispatch units 504. 

 

Each SM 306 includes a register file 506 that provides a set of registers for the functional units of the SM 306. In one embodiment, the register file 506 is divided between each of the functional units such that each functional unit is allocated a dedicated portion of the register file 506. In another embodiment, the register file 506 is divided between the different warps being executed by the SM 306. The register file 506 provides temporary storage for operands connected to the data paths of the functional units.

 

Each SM 306 comprises L processing cores 508. In one embodiment, the SM 306 includes a large number (e.g., 128, etc.) of distinct processing cores 508. Each processing core 508 may include a fully-pipelined, single-precision, double-precision, and/or mixed precision processing unit that includes a floating point arithmetic logic unit and an integer arithmetic logic unit. In one embodiment, the floating point arithmetic logic units implement the IEEE 754-2008 standard for floating point arithmetic. In one embodiment, the processing cores 508 include 64 single-precision (32-bit) floating point cores, 64 integer cores, 32 double-precision (64-bit) floating point cores, and 8 tensor cores.

 

Tensor cores are configured to perform matrix operations, and, in one embodiment, one or more tensor cores are included in the processing cores 508. In particular, the tensor cores are configured to perform deep learning matrix arithmetic, such as convolution operations for neural network training and inferencing. In one embodiment, each tensor core operates on a 4×4 matrix and performs a matrix multiply-and-accumulate operation D = A×B + C, where A, B, C, and D are 4×4 matrices.

 

In one embodiment, the matrix multiply inputs A and B are 16-bit floating point matrices, while the accumulation matrices C and D may be 16-bit floating point or 32-bit floating point matrices. Tensor cores operate on 16-bit floating point input data with 32-bit floating point accumulation. The 16-bit floating point multiply requires 64 operations and results in a full precision product that is then accumulated using 32-bit floating point addition with the other intermediate products for a 4×4×4 matrix multiply. In practice, tensor cores are used to perform much larger two-dimensional or higher dimensional matrix operations, built up from these smaller elements. An API, such as the CUDA 9 C++ API, exposes specialized matrix load, matrix multiply and accumulate, and matrix store operations to efficiently use tensor cores from a CUDA-C++ program. At the CUDA level, the warp-level interface assumes 16×16 size matrices spanning all 32 threads of the warp.
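Numerically, the mixed-precision operation reduces to multiplying 16-bit inputs while accumulating in 32 bits, as in the sketch below; NumPy stands in here purely to show the arithmetic, not the tensor-core instruction itself.

```python
# D = A x B + C with fp16 multiply inputs and fp32 accumulation (a numerical sketch).
import numpy as np

A = np.random.rand(4, 4).astype(np.float16)   # 16-bit multiply input
B = np.random.rand(4, 4).astype(np.float16)   # 16-bit multiply input
C = np.random.rand(4, 4).astype(np.float32)   # 32-bit accumulation input

D = A.astype(np.float32) @ B.astype(np.float32) + C   # products accumulated in fp32
print(D.dtype, D.shape)                               # float32 (4, 4)
```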

 

Each SM 306 also comprises M SFUs 604 that perform special functions (e.g., attribute evaluation, reciprocal square root, and the like). In one embodiment, the SFUs 604 may include a tree traversal unit configured to traverse a hierarchical tree data structure. In one embodiment, the SFUs 604 may include a texture unit configured to perform texture map filtering operations. In one embodiment, the texture units are configured to load texture maps (e.g., a 2D array of texels) from the memory 220 and sample the texture maps to produce sampled texture values for use in shader programs executed by the SM 306. In one embodiment, the texture maps are stored in the shared memory/L1 cache 516. The texture units implement texture operations such as filtering operations using mip-maps (i.e., texture maps of varying levels of detail). In one embodiment, each SM 306 includes two texture units.

 

Each SM 306 also comprises N LSUs 512 that implement load and store operations between the shared memory/L1 cache 516 and the register file 506. Each SM 306 includes an interconnect network 514 that connects each of the functional units to the register file 506 and the LSUs 512 to the register file 506 and the shared memory/L1 cache 516. In one embodiment, the interconnect network 514 is a crossbar that can be configured to connect any of the functional units to any of the registers in the register file 506 and to connect the LSUs 512 to the register file and memory locations in the shared memory/L1 cache 516.

 

The shared memory/L1 cache 516 is an array of on-chip memory that allows for data storage and communication between the SM 306 and the primitive engine 304 and between threads in the SM 306. In one embodiment, the shared memory/L1 cache 516 comprises 128 KB of storage capacity and is in the path from the SM 306 to the memory partition unit 220. The shared memory/L1 cache 516 can be used to cache reads and writes. One or more of the shared memory/L1 cache 516, L2 cache 404, and memory 220 are backing stores.

 

Combining data cache and shared memory functionality into a single memory block provides the best overall performance for both types of memory accesses. The capacity is usable as a cache by programs that do not use shared memory. For example, if shared memory is configured to use half of the capacity, texture and load/store operations can use the remaining capacity. Integration within the shared memory/L1 cache 516 enables the shared memory/L1 cache 516 to function as a high-throughput conduit for streaming data while simultaneously providing high-bandwidth and low-latency access to frequently reused data.

 

When configured for general purpose parallel computation, a simpler configuration can be used compared with graphics processing. Specifically, the fixed function graphics processing units shown in Figure 2 are bypassed, creating a much simpler programming model. In the general purpose parallel computation configuration, the work distribution unit 216 assigns and distributes blocks of threads directly to the DPCs 318. The threads in a block execute the same program, using a unique thread ID in the calculation to ensure each thread generates unique results, using the SM 306 to execute the program and perform calculations, the shared memory/L1 cache 516 to communicate between threads, and the LSUs 512 to read and write global memory through the shared memory/L1 cache 516 and the memory partition unit 220. When configured for general purpose parallel computation, the SM 306 can also write commands that the scheduler unit 214 can use to launch new work on the DPCs 318.
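The unique-thread-ID pattern described above can be modeled with two nested loops standing in for blocks and threads; the element-wise scaling and block size are arbitrary choices for illustration.

```python
# Every "thread" runs the same program and uses its unique ID to pick its element.
def run_grid(data, block_dim=4):
    num_blocks = (len(data) + block_dim - 1) // block_dim
    out = list(data)
    for block_idx in range(num_blocks):                 # blocks distributed to DPCs
        for thread_idx in range(block_dim):             # threads within one block
            tid = block_idx * block_dim + thread_idx    # unique thread ID
            if tid < len(data):                         # guard for the last, partial block
                out[tid] = data[tid] * 2.0
    return out

print(run_grid([1.0, 2.0, 3.0, 4.0, 5.0]))              # [2.0, 4.0, 6.0, 8.0, 10.0]
```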

 

The PPU 202 may be included in a desktop computer, a laptop computer, a tablet computer, servers, supercomputers, a smart-phone (e.g., a wireless, hand-held device), a personal digital assistant (PDA), a digital camera, a vehicle, a head mounted display, a hand-held electronic device, and the like. In one embodiment, the PPU 202 is embodied on a single semiconductor substrate. In another embodiment, the PPU 202 is included in a system-on-a-chip (SoC) along with one or more other devices such as additional PPUs 202, the memory 220, a reduced instruction set computer (RISC) CPU, a memory management unit (MMU), a digital-to-analog converter (DAC), and the like. 

 

In one embodiment, the PPU 202 may be included on a graphics card that includes one or more memory devices 222. The graphics card may be configured to interface with a PCIe slot on a motherboard of a desktop computer. In yet another embodiment, the PPU 202 may be an integrated graphics processing unit (iGPU) or parallel processor included in the chipset of the motherboard.

 

Exemplary Computing System

 

Systems with multiple GPUs and CPUs are used in a variety of industries as developers expose and leverage more parallelism in applications such as artificial intelligence computing. High-performance GPU-accelerated systems with tens to many thousands of compute nodes are deployed in data centers, research facilities, and supercomputers to solve ever larger problems. As the number of processing devices within the high-performance systems increases, the communication and data transfer mechanisms need to scale to support the increased bandwidth.

Brief Description:

Figure 6 is a conceptual diagram of a processing system implemented using the PPU of Figure 2, in accordance with one embodiment.

Detailed Description:

Figure 6 is a conceptual diagram of a processing system 600 implemented using the PPU 202 of Figure 2, in accordance with one embodiment. The processing system 600 may be configured to implement the method 100 shown in Figure 1. The processing system 600 includes a CPU 602, a switch 604, and multiple PPUs 202, each with respective memories 220. The NVLink 206 provides high-speed communication links between each of the PPUs 202. The switch 604 interfaces between the interconnect 224 and the CPU 602. The PPUs 202, memories 220, and NVLinks 206 may be situated on a single semiconductor platform to form a parallel processing module 606.

 

In the context of the present description, a single semiconductor platform may refer to a sole unitary semiconductor-based integrated circuit fabricated on a die or chip. It should be noted that the term single semiconductor platform may also refer to multi-chip modules with increased connectivity which simulate on-chip operation and make substantial improvements over utilizing a conventional bus implementation. Of course, the various circuits or devices may also be situated separately or in various combinations of semiconductor platforms per the desires of the user. Alternately, the parallel processing module 606 may be implemented as a circuit board substrate and each of the PPU 202 s and/or memories 220 may be packaged devices. In one embodiment, the CPU 602, switch 604, and the parallel processing module 606 are situated on a single semiconductor platform. 

 

In one embodiment, the signaling rate of each NVLink 206 is 20 to 25 Gigabits/second and each PPU 202 includes six NVLink 206 interfaces (as shown in Figure 6, five NVLink 206 interfaces are included for each PPU 202). Each NVLink 206 provides a data transfer rate of 25 Gigabytes/second in each direction, with six links providing 300 Gigabytes/second. The NVLinks 206 can be used exclusively for PPU-to-PPU communication as shown in Figure 6, or for some combination of PPU-to-PPU and PPU-to-CPU communication, when the CPU 602 also includes one or more NVLink 206 interfaces.

 

In one embodiment, the NVLink 206 allows direct load/store/atomic access from the CPU 602 to each PPU 202's memory 220. In one embodiment, the NVLink 206 supports coherency operations, allowing data read from the memories 220 to be stored in the cache hierarchy of the CPU 602, reducing cache access latency for the CPU 602. In one embodiment, the NVLink 206 includes support for address translation services (ATS), allowing the PPU 202 to directly access page tables within the CPU 602. One or more of the NVLinks 206 may also be configured to operate in a low-power mode.

Brief Description:

Figure 7 illustrates an exemplary system in which the various architecture and/or functionality of the various previous embodiments may be implemented.

Detailed Description:

Figure 7 illustrates an exemplary system 700 in which the various architecture and/or functionality of the various previous embodiments may be implemented. The exemplary system 700 may be configured to implement the method 100 for performing autonomous path navigation using deep neural networks shown in Figure 1.

As shown, an exemplary system 700 is provided including at least one central processing unit 602 that is connected to a communication bus 712. The communication bus 712 may be implemented using any suitable protocol, such as PCI (Peripheral Component Interconnect), PCI-Express, AGP (Accelerated Graphics Port), HyperTransport, or any other bus or point-to-point communication protocol(s). The exemplary system 700 also includes a main memory 702. Control logic (software) and data are stored in the main memory 702, which may take the form of random access memory (RAM). 

The exemplary system 700 also includes input devices 708, the parallel processing system 606, and display devices 706, i.e., a conventional CRT (cathode ray tube), LCD (liquid crystal display), LED (light emitting diode), plasma display, or the like. User input may be received from the input devices 708, e.g., keyboard, mouse, touchpad, microphone, and the like. Each of the foregoing modules and/or devices may even be situated on a single semiconductor platform to form the exemplary system 700. Alternately, the various modules may also be situated separately or in various combinations of semiconductor platforms per the desires of the user.

Further, the exemplary system 700 may be coupled to a network (e.g., a telecommunications network, local area network (LAN), wireless network, wide area network (WAN) such as the internet, peer-to-peer network, cable network, or the like) through a network interface 704 for communication purposes.

The exemplary system 700 may also include a secondary storage (not shown). The secondary storage includes, for example, a hard disk drive and/or a removable storage drive, representing a floppy disk drive, a magnetic tape drive, a compact disk drive, a digital versatile disk (DVD) drive, a recording device, or universal serial bus (USB) flash memory. The removable storage drive reads from and/or writes to a removable storage unit in a well-known manner.

Computer programs, or computer control logic algorithms, may be stored in the main memory 702 and/or the secondary storage. Such computer programs, when executed, enable the exemplary system 700 to perform various functions. The main memory 702, the storage, and/or any other storage are possible examples of computer-readable media.

The architecture and/or functionality of the various previous figures may be implemented in the context of a general computer system, a circuit board system, a game console system dedicated for entertainment purposes, an application-specific system, and/or any other desired system. For example, the exemplary system 700 may take the form of a desktop computer, a laptop computer, a tablet computer, servers, supercomputers, a smart-phone (e.g., a wireless, hand-held device), a personal digital assistant (PDA), a digital camera, a vehicle, a head mounted display, a hand-held electronic device, a mobile phone device, a television, a workstation, game consoles, an embedded system, and/or any other type of logic.

While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Machine Learning

Deep neural networks (DNNs) developed on processors, such as the PPU 202, have been used for diverse use cases, from self-driving cars to faster drug development, from automatic image captioning in online image databases to smart real-time language translation in video chat applications. Deep learning is a technique that models the neural learning process of the human brain, continually learning, continually getting smarter, and delivering more accurate results more quickly over time. A child is initially taught by an adult to correctly identify and classify various shapes, eventually being able to identify shapes without any coaching. Similarly, a deep learning or neural learning system needs to be trained in object recognition and classification for it to get smarter and more efficient at identifying basic objects, occluded objects, etc., while also assigning context to objects.

At the simplest level, a neuron in the human brain looks at the various inputs it receives, assigns an importance level to each of these inputs, and passes its output on to other neurons to act upon. An artificial neuron, or perceptron, is the most basic model of a neural network. In one example, a perceptron may receive one or more inputs that represent various features of an object that the perceptron is being trained to recognize and classify, and each of these features is assigned a certain weight based on the importance of that feature in defining the shape of the object.
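For illustration, the following minimal sketch implements a single perceptron in Python; the feature values, weights, and bias are hypothetical and only illustrate the weighted-input behavior described above.

```python
# Minimal perceptron sketch: a weighted sum of feature inputs plus a bias,
# followed by a step activation. The feature values, weights, and bias are
# hypothetical and only illustrate the behavior described above.
import numpy as np

def perceptron(inputs, weights, bias):
    activation = np.dot(inputs, weights) + bias
    return 1 if activation > 0 else 0

features = np.array([0.9, 0.2, 0.4])   # features of the object being classified
weights = np.array([0.7, 0.1, 0.3])    # learned importance of each feature
print(perceptron(features, weights, bias=-0.5))   # -> 1 (object recognized)
```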

A deep neural network (DNN) model includes multiple layers of many connected perceptrons (e.g., nodes) that can be trained with enormous amounts of input data to quickly solve complex problems with high accuracy. In one example, a first layer of the DNN model breaks down an input image of an automobile into various sections and looks for basic patterns such as lines and angles. The second layer assembles the lines to look for higher level patterns such as wheels, windshields, and mirrors. The next layer identifies the type of vehicle, and the final few layers generate a label for the input image, identifying the model of a specific automobile brand.

Once the DNN is trained, the DNN can be deployed and used to identify and classify objects or patterns in a process known as inference. Examples of inference (the process through which a DNN extracts useful information from a given input) include identifying handwritten numbers on checks deposited into ATM machines, identifying images of friends in photos, delivering movie recommendations to over fifty million users, identifying and classifying different types of automobiles, pedestrians, and road hazards in driverless cars, or translating human speech in real time.

During training, data flows through the DNN in a forward propagation phase until a prediction is produced that indicates a label corresponding to the input. If the neural network does not correctly label the input, then errors between the correct label and the predicted label are analyzed, and the weights are adjusted for each feature during a backward propagation phase until the DNN correctly labels the input and other inputs in a training dataset. Training complex neural networks requires massive amounts of parallel computing performance, including floating-point multiplications and additions that are supported by the PPU 202. Inferencing is less compute-intensive than training; it is a latency-sensitive process in which a trained neural network is applied to new inputs it has not seen before to classify images, translate speech, and generally infer new information.
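The following minimal Python sketch illustrates the loop described above for a single-layer network with a sigmoid output: forward propagation, error between predicted and correct labels, and a backward-propagation-style weight adjustment. The data, network size, and learning rate are hypothetical; a real DNN repeats the same steps over many layers and large training datasets.

```python
# Sketch of one training loop: forward propagation, error computation, and
# gradient-based weight adjustment for a single-layer sigmoid network.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.random((8, 4))                               # 8 inputs, 4 features each
y = rng.integers(0, 2, size=(8, 1)).astype(float)    # correct labels
W = 0.1 * rng.standard_normal((4, 1))                # feature weights
learning_rate = 0.5

for _ in range(1000):
    pred = sigmoid(X @ W)                       # forward propagation
    error = pred - y                            # prediction vs. correct label
    grad = X.T @ (error * pred * (1.0 - pred)) / len(X)
    W -= learning_rate * grad                   # adjust weights (backward pass)
```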

Neural networks rely heavily on matrix math operations, and complex multi-layered networks require tremendous amounts of floating-point performance and bandwidth for both efficiency and speed. With thousands of processing cores optimized for matrix math operations and delivering tens to hundreds of TFLOPS of performance, the PPU 202 is a computing platform capable of delivering the performance required for deep neural network-based artificial intelligence and machine learning applications.

Brief Description:

illustrates an exemplary system 700 for performing autonomous path navigation using deep neural networks, in accordance with one embodiment.

Detailed Description:

Figure 8 illustrates an exemplary system 700 for performing autonomous path navigation using deep neural networks, according to one embodiment. As shown, the exemplary system 700 includes a camera module 804 in communication with a TrailNet DNN module 806, an object detection DNN module 808, and an obstacle detector module 810. In one embodiment, the camera module 804 may provide visualization data (e.g., image data, radar data, depth data, lidar data, infrared data, sonar data, etc.) to the TrailNet DNN module 806, the object detection DNN module 808, and the obstacle detector module 810. In another embodiment, the camera module 804 may manage one or more cameras of a variety of different types within a vehicle.

Additionally, in one embodiment, the TrailNet DNN module 806 may receive visualization data from the camera module 804, and may output vehicle location information. For example, the TrailNet DNN module 806 may output three numbers that indicate the orientation of the vehicle with respect to the path and three numbers that indicate the lateral position of the vehicle with respect to the path.

Further, in one embodiment, the object detection DNN module 808 may receive visualization data from the camera module 804, and may output an indication as to whether a person or large animal is present within the visualization data (e.g., utilizing a DNN such as a YOLO DNN, etc.). In another embodiment, the obstacle detector module 810 may receive visualization data from the camera module 804, and may output a set of weights indicating a likelihood of obstacles at various locations and distances (e.g., utilizing simultaneous location and mapping (SLAM), etc.). In this way, the obstacle detector module 810 may identify a location of a camera within a scene, and may provide information about a relative location of static objects within the scene.

Further still, the exemplary system 700 includes a controller module 812. In one embodiment, the controller module 812 may receive vehicle location information from the TrailNet DNN module 806 (e.g., representing the vehicle's path orientation and lateral position), and may create steering directions (e.g., a steering angle for the vehicle, etc.), utilizing the vehicle location information.
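As a hedged illustration of how the controller module 812 might turn the six TrailNet DNN outputs into a steering angle, the following Python sketch assumes the outputs are probabilities for view orientation (facing left/straight/right of the path) and lateral offset (left of/on/right of the path center); the blending weight and maximum steering angle are hypothetical tuning parameters, not the embodiment's actual control law.

```python
# Hedged sketch of mapping the six TrailNet DNN outputs to a steering angle.
# Positive correction steers right: rotated left or offset left of the path
# center both call for a rightward correction.
def steering_angle(view_probs, offset_probs, blend=0.5, max_angle_deg=30.0):
    view_left, _, view_right = view_probs
    off_left, _, off_right = offset_probs
    correction = blend * (view_left - view_right) + (1.0 - blend) * (off_left - off_right)
    return max_angle_deg * correction

# Vehicle rotated left and offset left of the path center -> steer right.
print(steering_angle((0.6, 0.3, 0.1), (0.5, 0.4, 0.1)))   # -> 13.5 degrees
```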

Also, the exemplary system 700 includes a communication module 814. The communication module 814 may receive the steering directions in a first format (e.g., a ROS protocol, etc.) from the controller module 812, and may convert them to messages in a second format (e.g., an MAV protocol, etc.). The communication module 814 may then broadcast the converted messages in the second format to a vehicle systems module 818, utilizing a communication protocol 816.
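The sketch below illustrates, in Python, the kind of format conversion the communication module 814 performs between a first-format steering message and a second-format message; both message classes, their fields, and the scaling are hypothetical stand-ins, not real ROS or MAV message definitions.

```python
# Hedged sketch of repackaging a steering command from a first message
# format into a second format before it is broadcast to the vehicle systems
# module. All types and fields here are hypothetical.
import math
from dataclasses import dataclass

@dataclass
class FirstFormatSteering:        # hypothetical first-format message
    steering_angle_rad: float
    speed_mps: float

@dataclass
class SecondFormatSteering:       # hypothetical second-format message
    yaw_rate_centideg_s: int
    throttle_pct: int

def convert(msg: FirstFormatSteering, max_speed_mps: float = 3.0) -> SecondFormatSteering:
    return SecondFormatSteering(
        yaw_rate_centideg_s=int(math.degrees(msg.steering_angle_rad) * 100),
        throttle_pct=int(100 * msg.speed_mps / max_speed_mps),
    )

print(convert(FirstFormatSteering(steering_angle_rad=0.1, speed_mps=1.5)))
```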

In addition, in one embodiment, the vehicle systems module 818 may receive the converted messages, and may use such messages to control one or more physical components of the vehicle (e.g., in order to control movement of the vehicle, etc.). In this way, the controller module 812 may compute steering directions and send the steering directions to the communication module 814, which may convert the directions to a different format and send them to the vehicle systems module 818 for implementation at the vehicle.

Further, the exemplary system 700 includes a manual input device module 820. The manual input device module 820 may receive input from a user (e.g., a startup indicator, a kill switch selection, a manual override selection, etc.), and may send such information to the controller module 812. In this way, manual user input may be provided to the exemplary system 700.

Further still, the camera module 804, the TrailNet DNN module 806, the object detection DNN module 808, the obstacle detector module 810, the controller module 812, the communication module 814, and the manual input device module 820 are all implemented within a single processor 802. Communication between such modules may be made using a predetermined protocol (e.g., a ROS protocol, etc.). The vehicle systems module 818 is implemented within control hardware 816 of the vehicle that is separate from the processor 802.

Low-Flying Autonomous MAV Trail Navigation Using Deep Neural Networks for Environmental Awareness

Introduction 

In one embodiment, autonomously following a man-made trail in the forest is a challenging problem for robotic systems. Applications for such a capability include, among others, search-and-rescue, environmental mapping, wilderness monitoring, and personal videography. Micro aerial vehicles (MAVs) offer a number of advantages for solving this problem: they are not limited by the difficulty or traversability of the terrain, they are capable of high speeds, and they have the ability to quickly switch from one trail to another by flying through or over the forest.

In order for a complete MAV system to follow a trail, it may not only detect the trail in order to determine its steering commands, but may also be aware of its surroundings. An MAV that lacks such a capability is in danger of colliding with overhanging branches or, even worse, with people or pets using the trail. Environmental awareness is therefore one component for trail-following robots, particularly for low-flying MAVs.

In one embodiment, an MAV system is provided for autonomous trail following. The system may use a deep neural network (DNN) (called TrailNet in this example) for determining the MAV’s view orientation and lateral offset within the trail. The computed pose may then be used for continuous control to allow the MAV to fly over forest trails. In addition, vision modules for environmental awareness may enable the MAV to detect and avoid people and pets on the trail, as well as to estimate depth in front of the robot for the purpose of reactively avoiding obstacles. All subsystems may run simultaneously in real time on board the MAV using a standalone computing device. On-board processing may be used to ensure the safety of this mission-critical system.

In one embodiment, a hardware/software system may be implemented for environmentally aware autonomous trail navigation using DNNs that may run in real time on board an MAV. 

In another embodiment, a DNN architecture may be implemented for trail detection with improved accuracy and computational efficiency via a less confident classification scheme for more stable control, as well as additional categories for estimating both view orientation and lateral offset.

In yet another embodiment, a methodology for retraining a DNN may be implemented with 6 categories (for view orientation and lateral offset) by transfer learning from a network with 3 categories (orientation only). 

System Description

To ensure a robust and flexible platform for forest flight experiments, inexpensive off-the-shelf components may be used. One exemplary hardware setup may include a quadcopter or drone with autopilot software, an integrated computing device, and a carrier board. The vision processing may use a forward-facing camera. All processing may be done on the integrated computing device.

The MAV may be equipped with a downward-facing, high-framerate optical flow sensor with sonar and lidar. The flow sensor may provide reliable metric position and attitude estimation by computing 2D optical flow on the ground, then using ground distance measurements and gyro readings to compute vehicle ground speed. Once computed, this speed may be sent to an extended Kalman filter (EKF) running on flight controller hardware to be fused with other sensor measurements (e.g., IMU) for even more precise state estimation.
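The following Python sketch illustrates, under simplifying assumptions, how 2D optical flow, gyro rates, and ground distance can be combined into a metric ground-speed estimate of the kind fed to the EKF; the variable names and the simple scaling model are illustrative assumptions, not the sensor's actual firmware.

```python
# Hedged sketch of the flow-sensor computation: 2D optical flow (rad/s) is
# corrected for rotation using gyro rates and scaled by the measured ground
# distance to yield a metric ground-speed estimate for the EKF.
def ground_speed(flow_x_rad_s, flow_y_rad_s,
                 gyro_pitch_rad_s, gyro_roll_rad_s,
                 ground_distance_m):
    vx = (flow_x_rad_s - gyro_pitch_rad_s) * ground_distance_m
    vy = (flow_y_rad_s - gyro_roll_rad_s) * ground_distance_m
    return vx, vy

print(ground_speed(0.8, -0.1, 0.05, 0.0, ground_distance_m=2.0))  # -> (1.5, -0.2)
```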

Brief Description:

illustrates an exemplary software architecture 900, in accordance with one embodiment.

Detailed Description:

Figure 9 illustrates an exemplary software architecture 900, according to one embodiment. A flight stack may be used as flight firmware 902 for the flight controller hardware autopilot 920. The flight firmware 902 may provide flexible and robust control over a range of MAV configurations. It may include software-in-the-loop (SITL) simulation, which may be used for controller testing and debugging. The on-board computer may communicate with the flight firmware 902 via a predetermined protocol (e.g., MavLink, etc.). The robotic operating system (ROS) 904 may be run on the on-board computer. As shown in Figure 9, the architecture 900 uses the following ROS nodes 906-910: a camera driver node 906 for reading USB camera input, a joystick driver node 908 for reading game controller commands used for teleoperation (e.g., during training and for emergency override), and a messaging bridge to external autopilot module 910 for communicating with the flight firmware 902.
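For illustration, a minimal ROS node in the spirit of the camera driver node 906 might look like the following Python (rospy) sketch; the node name, topic name, frame rate, and use of OpenCV for capture are assumptions rather than details of the described architecture.

```python
# Minimal rospy node sketch: capture frames from a USB camera and publish
# them for the vision nodes to consume. Names and rates are assumptions.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

def main():
    rospy.init_node("camera_driver_node")
    pub = rospy.Publisher("/camera/image_raw", Image, queue_size=1)
    bridge = CvBridge()
    capture = cv2.VideoCapture(0)          # USB camera input
    rate = rospy.Rate(30)                  # publish at roughly 30 Hz
    while not rospy.is_shutdown():
        ok, frame = capture.read()
        if ok:
            pub.publish(bridge.cv2_to_imgmsg(frame, encoding="bgr8"))
        rate.sleep()

if __name__ == "__main__":
    main()
```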

In one embodiment, vision processing may be performed by three nodes 912-916. A TrailNet DNN node 912 applies a trained TrailNet DNN. An object detection node 914 runs a real-time object detection DNN. An obstacle detector node 916 runs a visual odometry algorithm, whose output may be converted to a camera-centric depth map for obstacle detection and avoidance.

The controller node 918 may be responsible for computing desired movement commands (waypoints) per the current TrailNet DNN predictions, detected obstacles/objects, and teleoperation commands. For safety, the teleoperation commands may take precedence over DNN predictions, so a human operator may override the MAV at any time to prevent undesirable movements. The computed waypoint may then be sent by the controller node 918 to the messaging bridge to external autopilot module 910, which resubmits it to the flight firmware 902. A right-handed ENU (east-north-up) inertial coordinate frame may be used for waypoint computation, which may be converted to the flight firmware 902's right-handed NED (north-east-down) coordinate frame.
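The ENU-to-NED conversion mentioned above amounts to swapping the east and north axes and negating the vertical axis, as the following short Python sketch shows.

```python
# ENU (east-north-up) to NED (north-east-down): swap the horizontal axes and
# negate the vertical axis, since NED measures altitude as positive downward.
def enu_to_ned(east, north, up):
    return north, east, -up

print(enu_to_ned(east=2.0, north=5.0, up=1.5))   # -> (5.0, 2.0, -1.5)
```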


Parts List

100

method for performing autonomous path navigation using deep neural networks

102

image data is received at a deep neural network (DNN)

104

DNN determines both an orientation of a vehicle with respect to a path and a lateral position of the vehicle with respect to the path, utilizing the image data

106

location of the vehicle is controlled, utilizing the orientation of the vehicle with respect to the path and the lateral position of the vehicle with respect to the path

202

PPU

204

GPCs

206

NVLinks

208

I/O unit

210

hub

212

front end unit

214

scheduler units

216

work distribution unit

218

XBar

220

memory partition unit

222

memory devices

224

interconnect

302

pipeline manager

304

primitive engine

306

SM

308

MPC

310

PROP unit

312

raster engine

314

MMU

316

WDX

318

DPCs

402

ROP unit

404

L2 cache

406

memory interface

502

instruction cache

504

dispatch unit

506

register file

508

processing cores

510

SFUs

512

LSUs

514

interconnect network

516

shared memory/L1 cache

600

processing system

602

CPU

604

switch

606

parallel processing module

700

exemplary system

702

main memory

704

network interface

706

display devices

708

input devices

710

communication bus

802

processor

804

camera module

806

TrailNet DNN module

808

object detection DNN module

810

obstacle detector module

812

controller module

814

communication module

816

control hardware

818

vehicle systems module

820

manual input device module

900

exemplary software architecture

902

flight firmware

904

ROS

906

camera driver node

908

joystick driver node

910

messaging bridge to external autopilot module

912

TrailNet DNN node

914

object detection node

916

obstacle detector node

918

controller node

920

flight controller hardware autopilot


Terms/Definitions

XBar

performance

disease diagnosis

collective operations

its surroundings

same program

checks

core

fixed function graphics processing units

processing system

predictions

work distribution

devices

number U

operands

text recognition systems

coaching

automobile

movements

shader program

technique

point-to-point communication protocol(s)

adult

flowchart

practice

ground distance measurements and gyro readings

variety

conventional CRT

cooperative parallelism

arbitration

pixel fragments

static obstacles

full access

floppy disk drive

diagram

contrast

board

pending task pool

matrix

integrated graphics processing unit

communication module

USB camera input

four memory

operation

likelihood

3 categories

input devices

capacity

more accurate results

tremendous amounts

substantial improvements

pictorial images

people and pets

texture units

others

application programming interface

object and obstacle detection

graphics double-data-rate, version

process

deep learning matrix arithmetic

coverage information

most basic model

data centers

accumulate operation D=A×B+C

parallel threads

memory bridge

culvert

local area network

call stack

address translation services

such information

limitation

RISC

DNN analysis

copy engines

preferred embodiment

downward-facing

program counter

related threads

priority level

maximum efficiency

32 slots

specific automobile brand

remote location

CUDA-C++ program

controller node

matrix multiply

converted steering directions

PCI-Express

real time

texture map filtering operations

y equals

object recognition

M-Pipe Controller (MPC)

clipping engine

factory automation

standalone computing device

environmental

data cache

matrix store operations

context

frame

hardware

distributed computing environment

communication purposes

at least one central processing unit

control hardware

implementation

human speech

vehicle ground speed

Cooperative Groups primitives

smart real-time language translation

probabilities

memory partition unit

application-specific system

parallel algorithms

private memory

new patterns

varying levels

reduced confidence

integer arithmetic logic unit

computing desired movement commands

scheduler units

many thousands

vision processing

conventional programming models

obstacle detection

multi-block granularities

clock cycle

next layer

importance

host interface unit

obstacle detector node

various locations and distances

vehicle

processing

hundreds

classification scheme

increased

two different instructions

sole unitary semiconductor-based integrated circuit

loss function

size matrices

block

personal videography

mission-critical system

various embodiments

software boundaries

various operations

both an orientation

control logic

peer-to-peer network

keyboard, mouse, touchpad, microphone

removable storage drive

well-known manner

workload

backward propagation phase

terrain

instance

other type

autonomous navigation

TrailNet DNN module

interconnect

difficulty or traversability

array

center

wilderness monitoring

matrix load

liquid crystal display

micro aerial vehicles

prediction

DRAM devices

memory devices

serial execution

large-scale cluster computing environments

culling engine

same instructions

32-bit floating point accumulation

digital camera

streaming video

scene

teleoperation commands

total

camera module

front end unit

depth buffer

various modules

sonar data

circuit board substrate

various shapes

street

HyperTransport

single-precision

various previous embodiments

matrix math operations

simultaneous location and mapping

generally infer new information

floating-point multiplications and additions

evaluation

virtual memory systems

user

service

second format

path center

convolution operations

cache coherence

programs

speed

inexpensive off-the-shelf components

route data

pipeline

subsystems

start

organizing groups

other commands

result

multi-threaded processor

semiconductor platforms

clean composition

self-driving cars

perceptron

laptop computer

lateral offset

type

new work

desires

multiple copy engine operations

kill switch selection

servers

tile

useful information

obstacle detector module

data center

instruction scheduling

various functions

possibilities

Peripheral Component Interconnect Express

storage

WDX

lieu

conventional system

comprises

trail-following robots

DNN correctly labels

same set

time

method for performing autonomous path navigation using deep neural networks

packets

neural network training and inferencing

predetermined protocol

at least one warp

include

MAV system

instructions

SDRAM systems

software reuse

defined groups

images

region

righthanded ENU (east-north-up) inertial coordinate frame

high speeds

TrailNet DNN node

digital versatile disk

appropriate units

external autopilot module

shared memory

high-performance systems increases

advantages

on-chip memory

ATM machines

texels

parallel processor

motherboard

vehicle control protocol

other storage

floating-point performance and bandwidth

flexible platform

syncthreads

computational efficiency

conversion

game consoles

DNNs

friends

functioning

molecular dynamics simulation

cable network

input

hard disk drive

processors

other PPUs

plurality

color compression

multi-chip modules

rover

smaller elements

convergence

thousands

memory management unit

example only

online search optimizations

parallel processing unit (PPU)

other intermediate products

PROP unit

removable storage unit

corresponding depth

driver kernel

high-speed data transfer

memory management unit (MMU)

DNN predictions

optical data

other peripheral devices

directional stability

compact disk drive

alternative embodiments

various logical units

I/O unit

Low-Flying Autonomous MAV Trail Navigation Using Deep Neural Networks

state

system

parallel computing performance

32-bit floating point addition

imaging devices

barrier

simplest level

PCIe slot

three numbers

higher reliability

universal serial bus

other words

systems

video data

pipeline manager

Peripheral Component Interconnect

memory interface

warp-level interface

high bandwidth memory stacks

molecular simulations

memory/L1 cache

unique thread

computation

digital-to-analog converter

computed waypoint

controlled mobile object

synchronization

obstacle

various circuits or devices

synchronous dynamic random access memory

opportunistic parallelism

graphics card

parallel processing module

various GPCs

MMU

training dataset

estimation

user input

MAVs

artificial neuron or perceptron

generate results

object

scheduled tasks

obstacles

PPU-to-PPU communication

physical memory

graphics processing unit

danger

fine raster engine

plurality of probabilities

input image

depth test

other desired system

size

airplane

addition

reading game controller commands

multiple compute applications

implement texture operations

memory requests

radar data

label

interconnect network

low-power mode

foregoing modules and/or devices

illustrative purposes only

object detection node

image data is received at a deep neural network (DNN)

more parallelism

GDDR5 SDRAM

predetermined percentage

video encoder

general computer system

MavLink

reduced confidence implementation

improved accuracy

astronomy

complete MAV system

less confident classification scheme

windshields

camera-centric depth map

present description

producer-consumer parallelism

raster engine

data dependency

floating point arithmetic logic units

DRAM

basic patterns

subsequent iterations

range

objects or patterns

concurrency

L1 caches

predetermined size

location

train tracks

forest flight experiments

method

hardware/software system

mirrors

LIDAR device

separate and distinct memory devices

pixel fragment

external devices

on-board processing

feature

system-on-a-chip

various raster operations

data storage and communication

magnetic tape drive

ROS

PCIe

camera

commands

wheels

complex problems

deep neural network

challenging problem

substantial power

other system memory

quality

classes

research facilities

interface

perform address translations

transfer learning

integrated computing device

more detail

Single-Error Correcting Double-Error Detecting

general purpose parallel computation configuration

inferencing

logic

middleware protocol

following ROS nodes

other embodiments

data bus width

extended Kalman filter

high-speed communication links

ground

Accelerated Graphics Port

various units

personalized user recommendations

efficiency and speed

orientation

telecommunications network

urban canyon

second DNN

operations

isolation

high framerate optical flow sensor

static objects

configuration

x,y coverage mask

two-dimensional (2D) image data

fixed function hardware units

architecture

single, simple construct

32 threads

switch

probability

vehicle systems

robotic systems

quadcopter or drone

higher dimensional matrix operations

MAV protocol

methodology

correct label

processor

shared memory/L1 cache

other neurons

video chat applications

signaling rate

term single semiconductor platform

pedestrians

depth data

photos

intelligent video analytics

sonar imaging device

computer-readable media

schedules

drug discovery

more classes

deep neural networks

processing tasks

automobiles

stores

active task pool

workstation

tablet computer

such messages

functional units

LIDAR

personal digital assistant

on-board, real-time processing

new inputs

reads and writes

big data analytics

texture unit

dedicated portion

matrix operations

communication protocol

other types

conventional bus implementation

cathode ray tube

dispatch unit

HBM2 stack

PPU

vision modules

nodes

computer control logic algorithms

DPCs

HBM2 memory stacks

buffer

software-in-the-loop

basic objects

processing cores

read/write

host processor

direct load/store/atomic access

memory pages

person

memories

survive clipping and culling

graphics data

fragment shader

providing

reciprocal square root

its steering commands

wireless, hand-held device

various tasks

firmware

SFUs

form

associated correct label

data corruption

image data

real-time object detection DNN

internet

autopilot software

high-accuracy speech

general-purpose computations

neural network model

pre-raster operations unit (PROP)

obstacles/objects

supercomputers

specifically

YOLO DNN

infrared imaging device

primitive engine

depth

programmers

16-bit floating point

different warps

memory protection

floating point cores

multiple GPUs and CPUs

smart-phone (e.g.

HBM2 memory interface

tens

features

secondary storage

cache

tasks

CUDA level

design flexibility

other hardware units

high-throughput conduit

16×

controller module

indication

extended periods

thread blocks

television

movie recommendations

man-made trail

messaging bridge to external autopilot module

forest

financial modeling

High Performance Computing

global synchronization

child

expression

machine learning applications

half

amphibious vehicle

plane equations

protocol

well-known interfaces

high accuracy

conceptual diagram

exposes

their local context

tile coalescing engine

sub-block

texture and load/store operations

neural learning system

massive amounts

sample location

second layer

route packets

light detection

flight firmware

front

person, animal, etc

camera driver node

trail

deep neural network-based artificial intelligence

lateral position

obstacle data

system memory

accesses

continuous control

first layer

computing platform

motion

vehicle direction control

conjunction

architecture and/or functionality

road hazards

flight controller hardware autopilot

people or pets

even more precise state estimation

object detection

application

number of threads

cooperating threads

other bus

color

parallel

1024 bits

specific DNN training

latency

types

Parallel Processing Architecture

artificial intelligence computing

such computer programs

various combinations

visualization data

mobile phone device

group

yet another embodiment

purpose

matrices

digital imaging camera

increased connectivity

crossbar (Xbar)

vertex

load and store operations

rotation and translation data

multiple layers

page table

number of fixed function hardware units

attributes

Multiple-Data

examples

correct labels

same warp

low-flying MAVs

shape

pages

neural learning process

final few layers

processing devices

current location

steering angle

startup indicator

other sensor measurements

single unified virtual address space

load texture maps

trail detection

illustrative purposes

converted messages

task

more high-speed NVLink

CUDA

greater performance

M-Pipe Controller

thread block granularities

more stable control

SM

lines

warp

following claims

cache access latency

hierarchical tree data structure

physical addresses

Raster Operations (ROP)

supervised data

example

temporary storage

16-bit floating point input data

data

reduced instruction set computer

network

controller testing and debugging

embedded system

ever larger problems

32-bit floating point matrices

network interface

two texture units

forward propagation phase

from and/or writes

Single-Instruction

slots

memory locations

image

vehicles

environmental mapping

setup engine

IEEE 754-2008 standard

floating point

forest trails

entertainment purposes

software

1024-bit data buses

possible examples

certain weight

industries

data transfer rate

diverse use cases

sewer culvert

lateral offsets

data and/or commands

emergency override

execution state

real-time language translation

register file

device

other tasks

tensor cores

many threads

level one

DLL model

three nodes

precedence

functional unit

u memory interfaces

PCIe bus

crossbar

described above

location of the vehicle is controlled, utilizing the orientation of the vehicle with respect to the path and the lateral position of the vehicle with respect to the path

several instructions and data

other packets

SITL

trained neural network

i.e., texture maps

plasma display

program

objects

execution

real-time

various other units

attitude estimation

mixed precision processing unit

vertex shader program

flow sensor

level two

graphics

enormous amounts

32 double-precision

) function

safety mechanism

path

TLBs

large animal

boat

output

tire tracks

granularity

Multiple Thread

coarse raster engine

portion

random access memory

graphics processing

warp comprises

command stream

desktop computer

given input

other modules

third DNN

pair

System Description

middleware

200 memory

chipset

model

work distribution unit

human brain look

programming model

three numbers output

z-test

threads

six links

robotic operating system

various inputs

classify images

matrix multiply and accumulate

TFLOPS

radar device

optimizations

routing packets

recording device

training, data flows

entire grid

training

direction

multiple processors

automatic image captioning

copy engine

display devices

exemplary system

GPCs

storage capacity

mip-maps

infrared data

deep learning

occluded objects

DNN determines both an orientation of a vehicle with respect to a path and a lateral position of the vehicle with respect to the path, utilizing the image data

computed pose

collective group-wide function interfaces

avoidance

communication bus

pixel blending

various architecture and/or functionality

waypoints

exemplary software architecture

reliable metric position

communication and data transfer mechanisms

main memory

converted

multi-level memory hierarchy

results

hub

groups

SIMT

importance levels

labels

light emitting diode

workloads

determines

more NVLink

fragment

iGPU

tree traversal unit

special functions

higher level patterns

y memory devices

dynamic random access memory

floating point arithmetic

weather forecasting

vehicle systems module

CPU

autonomous path navigation

breadth and scope

weights

developers

PROP

tensor core

associated position

page faults

global memory

programmable streaming multiprocessor

instantiation

shapes

various functional units

Input/Output

flight stack

High-performance GPU-accelerated systems

ROS protocol

fragments

appropriate logical units

compute nodes

order

vehicle location controlling

units

applications

directions

accelerate numerous deep learning systems

integration

arithmetic logic unit

such a capability

graphics rendering pipeline

2D optical flow

pointers

streaming data

2D array

their equivalents

first format

active task

thread block

on-board computer

decoded commands

latency-sensitive process

memory accesses

DNN

Raster Operations

programmable streaming processor

visual odometry algorithm

training complex neural networks

carrier board

L2 cache

forward-facing camera

other units

calculation

accumulation matrices

transfer

various sections

die or chip

pictorial image

inference

response

LSUs

embodiment

streams

human brain

supervised classification network

translate speech

propagation

cooperative group

on-chip operation

fully-pipelined, single-precision, double-precision

cache hierarchy

inputs

L1 cache

detail

high-bandwidth memory

texture maps

very large datasets

best overall performance

various previous figures

neural network

lower level caches

parallel processing system

robust control

relative location

coherency operations

full precision product

process tasks

data paths

copy process

one trail

SLAM

waypoint computation

module

Machine Learning

blocks

translation

stereo image data

shared memory functionality

hardware units

other processors

texture

object detection DNN module

pointer

autonomous trail

perform calculations

driverless cars

vehicle’s

steering directions

path orientation

problem

vehicle location information

multiple DRAM

number

remaining capacity

video decoder

Exemplary Computing System

memory

predicted label

lines and angles

viewing frustum

interfaces

shader programs

such modules

human operator

object data

complex multi-layered networks

sampled texture values

robotics

teleoperation

half U

warp executes

hand-held electronic device

still another example

wireless network

view orientation

unique results

sonar and lidar

warp diverge

general purpose

combination

various features

reading and writing data

simultaneous localization and mapping

NVLinks

warps

forwarding commands

handwritten numbers

much simpler programming model

input data

state information

individual threads

DNN architecture

equal concurrency

such processor

MPC

vertex attributes

robot

environmentally aware autonomous trail navigation

low-latency access

manual user input

autonomous vehicle platforms

movement

computing pipeline

other inputs

neurons

color blending

compute applications

organize pixel data

Cooperative launch APIs support synchronization amongst thread blocks

game console system

information

ROP unit

hardware page faulting

current TrailNet DNN predictions

manual input device module

individual thread

SIMD

vehicle orientation

course

classification

lidar data

page tables

graphics raster operations

addresses

two dispatch units

virtual addresses

vertices

unified memory

area savings

reliability

messages

thread

over fifty million users

head

large number

suitable protocol

many connected perceptrons

errors

hydroplane

wide area network

joystick driver node

MAV configurations

local memory

environmental awareness

overhanging branches

assumptions

search-and-rescue

libraries and utility functions

online image databases

simpler configuration

16-bit floating point matrices

registers

available memory

same physical package

yet another example

faster drug development

manual override selection

power management unit

depth testing

frequency

circuit board system

instruction cache

label image data

micro aerial vehicle

support

safety

independent address spaces

Example Network Device


Drawings

Brief Description:

illustrates an example network device 100 in accordance with one embodiment.

Detailed Description:

Figure 1 illustrates an example network device 100 suitable for implementing the present invention. Network device 100 includes master central processing unit (CPU 104), interfaces 102, and bus 110 (e.g., a PCI bus). When acting under the control of appropriate software or firmware, CPU 104 is responsible for executing packet management, error detection, and/or routing functions, such as miscabling detection functions, for example. CPU 104 preferably accomplishes all these functions under the control of software including an operating system and any appropriate applications software. CPU 104 may include one or more processor(s) 108, such as a processor from the Motorola family of microprocessors or the MIPS family of microprocessors. In an alternative embodiment, processor(s) 108 are specially designed hardware for controlling the operations of router 100. In a specific embodiment, memory 106 (such as non-volatile RAM and/or ROM) also forms part of CPU 104. However, there are many different ways in which memory could be coupled to the system.

Interfaces 102 are typically provided as interface cards (sometimes referred to as “line cards”). Generally, they control the sending and receiving of data packets over the network and sometimes support other peripherals used with the router 100. Among the interfaces that may be provided are Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, and the like. In addition, various very high-speed interfaces may be provided, such as fast token ring interfaces, wireless interfaces, Ethernet interfaces, Gigabit Ethernet interfaces, ATM interfaces, HSSI interfaces, POS interfaces, FDDI interfaces, and the like. Generally, these interfaces may include ports appropriate for communication with the appropriate media. In some cases, they may also include an independent processor and, in some instances, volatile RAM. The independent processors may control such communications-intensive tasks as packet switching, media control, and management. By providing separate processors for the communications-intensive tasks, these interfaces allow the master microprocessor (CPU 104) to efficiently perform routing computations, network diagnostics, security functions, etc.

Although the system shown in Figure 1 is one specific network device of the present invention, it is by no means the only network device architecture on which the present invention can be implemented. For example, an architecture having a single processor that handles communications as well as routing computations, etc., is often used. Further, other types of interfaces and media could also be used with the router.

Regardless of the network device's configuration, it may employ one or more memories or memory modules (including memory 106) configured to store program instructions for the general-purpose network operations and mechanisms for roaming, route optimization, and routing functions described herein. The program instructions may control the operation of an operating system and/or one or more applications, for example. The memory or memories may also be configured to store tables such as mobility binding, registration, and association tables, etc.

Brief Description:

illustrates a conventional system bus computing system architecture 200 in accordance with one embodiment.

Detailed Description:

Figure 2 illustrates a conventional system bus computing system architecture 200 wherein the components of the system are in electrical communication with each other using a bus 202. Example system 200 includes a processing unit (CPU or processor 204) and a system bus 202 that couples various system components, including the system memory 214, such as read only memory (ROM 216) and random access memory (RAM 218), to the processor 204. The system 200 can include a cache of high-speed memory connected directly with, in close proximity to, or integrated as part of the processor 204. The system 200 can copy data from the memory 214 and/or the storage device 220 to the cache 206 for quick access by the processor 204. In this way, the cache can provide a performance boost that avoids processor 204 delays while waiting for data. These and other modules can control or be configured to control the processor 204 to perform various actions. Other system memory 214 may be available for use as well. The memory 214 can include multiple different types of memory with different performance characteristics. The processor 204 can include any general purpose processor and a hardware module or software module, such as module 1 222, module 2 224, and module 3 226 stored in storage device 220, configured to control the processor 204, as well as a special-purpose processor where software instructions are incorporated into the actual processor design. The processor 204 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric.

To enable user interaction with the computing device 200, an input device 212 can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech, and so forth. An output device 210 can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input to communicate with the computing device 200. The communications interface 240 can generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement, and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.

Storage device 220 is a non-volatile memory and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAM 218), read only memory (ROM 216), and hybrids thereof. 

The storage device 220 can include software modules 222, 224, 226 for controlling the processor 204. Other hardware or software modules are contemplated. The storage device 220 can be connected to the system bus 202. In one aspect, a hardware module that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as the processor 204, bus 202, display 235, and so forth, to carry out the function.

Brief Description:

illustrates a computer system 300 in accordance with one embodiment.

Detailed Description:

Figure 3 illustrates a computer system 300 having a chipset architecture that can be used in executing the described method and generating and displaying a graphical user interface (GUI). Computer system 300 is an example of computer hardware, software, and firmware that can be used to implement the disclosed technology. System 300 can include a processor 304, representative of any number of physically and/or logically distinct resources capable of executing software, firmware, and hardware configured to perform identified computations. Processor 304 can communicate with a chipset 302 that can control input to and output from processor 304. In this example, chipset 302 outputs information to output device 310, such as a display, and can read and write information to storage device 312, which can include magnetic media and solid state media, for example. Chipset 302 can also read data from and write data to RAM 314. A bridge 308 for interfacing with a variety of user interface components 306 can be provided for interfacing with chipset 302. Such user interface components 306 can include a keyboard, a microphone, touch detection and processing circuitry, a pointing device, such as a mouse, and so on. In general, inputs to system 300 can come from any of a variety of sources, machine generated and/or human generated.

Chipset 302 can also interface with one or more communication interfaces 208 that can have different physical interfaces. Such communication interfaces can include interfaces for wired and wireless local area networks, for broadband wireless networks, as well as personal area networks. Some applications of the methods for generating, displaying, and using the GUI disclosed herein can include receiving ordered datasets over the physical interface or being generated by the machine itself by processor 304 analyzing data stored in storage 312 or 314. Further, the machine can receive inputs from a user via user interface components 306 and execute appropriate functions, such as browsing functions, by interpreting these inputs using processor 304.

It can be appreciated that example systems 200 and 300 can have more than one processor 204, or be part of a group or cluster of computing devices networked together to provide greater processing capability.


Parts List

100

example network device

102

interfaces

104

CPU

106

memory

108

processor(s)

110

bus

200

conventional system bus computing system architecture

202

bus

204

processor

206

cache

208

communication interfaces

210

output device

212

input device

214

memory

216

ROM

218

RAM

220

storage device

222

module 1

224

module 2

226

module 3

300

computer system

302

chipset

304

processor

306

user interface components

308

bridge

310

output device

312

storage device

314

RAM


Terms/Definitions

magnetic media

output

appropriate functions

flash memory cards

random access memory (RAM)

processor(s)

datasets

multi-core processor

hardware

firmware arrangements

control

example

disclosed technology

registration

machine

graphical user interface

gesture or graphical input

close proximity

communication interfaces

input mechanisms

particular hardware arrangement

delays

particular function

interfaces and media

storage device

hardware module or software module

multiple types

computer

quick access

persons

read only memory (ROM)

digital versatile disks

touch-sensitive screen

network

module 2

mouse

ordinary skill

module 1

other system memory

memory

described method

output mechanisms

route optimization

input

general-purpose network operations

computer-readable medium

appropriate media

basic features

software instructions

high-speed memory

performance boost

ATM interfaces

actual processor design

cartridges

fast token ring interfaces

computer hardware

interfaces

computing device

electrical communication

display

bridge

multimodal systems

magnetic cassettes

restriction

all these functions

software component

user interaction

speech

multiple cores or processors

input device

general purpose processor

CPU or processor

present invention

user interface components

routing computations

bus

MIPS family

non-volatile memory

wireless local area networks

function

program instructions

configuration

part

identified computations

error detection

architecture

interface cards

independent processors

touch detection and processing circuitry

data

connection

media control and management

ethernet interfaces

DSL interfaces

physical interface

system

solid state media

hybrids

different physical interfaces

motorola family

keyboard

more than one processor

master central processing unit (CPU)

user

physically and/or logically distinct resources

communications

cable interfaces

many different ways

miscabling detection functions

instances

variety

various actions

components

other types

communication

memory controller

token ring interfaces

applications

random access memories (RAMs)

solid state memory devices

only network device architecture

specific embodiment

independent processor

network device

one aspect

sending

more appropriate embodiment

ROM

association tables

other system embodiments

various system components

data packets

memory or memories

addition

frame relay interfaces

router

interfacing

skill

microphone

chipset architecture

operations

sources

HSSI interfaces

browsing functions

appropriate applications software

non-volatile RAM and/or ROM

RAM

firmware

storage

improved hardware

motion

multiple different types

analyzing data

roaming

specially designed hardware

mechanisms

methods

user input and system output

cases

broadband wireless networks

greater processing capability

master microprocessor

inputs

FDDI interfaces

example system

operation

POS interfaces

various very high-speed interfaces

such communication interfaces

illustrate example system

example network device

functions

chipset

computer system

completely self-contained computing system

such user interface components

alternative embodiment

hardware module

module 3

tables

module

different performance characteristics

other peripherals

routing functions

operating system and/or one or more applications

microprocessors

software modules

computing devices

appropriate software or firmware

network device’s

packet switching

number

hard disk

processor

only memory (ROM)

group or cluster

Gigabit Ethernet interfaces

separate processors

device

necessary hardware components

other hardware

communications intensive tasks

such communications intensive tasks

wireless interfaces

means

information

mobility binding

output device

software

volatile RAM

system memory

pointing device

network diagnostics

media

example systems

special-purpose processor

cache

packet management

present technology

processing unit

personal area networks

security functions

other modules

operating system

conventional system bus computing system architecture

ports

PCR


Drawings

Brief Description:

illustrates a reaction system 100 in accordance with one embodiment.

Detailed Description:

Referencing Figure 1, a reaction system 100 illustrates a set of initial conditions and quantities 124 for a quantitative Polymerase Chain Reaction (qPCR) that includes reagent(s) 106 (e.g., polymerase, primers, probes, etc.) and a sample 118 (e.g., target DNA strand, template DNA strand, etc.). In qPCR, the sample 118 may contain DNA strands that serve as a template during the amplification process. The sample 118 may undergo a sample preparation process prior to being combined with the reagent(s) 106.

In qPCR, the initial conditions and quantities 124 may additionally include quantities 122 for the reagent(s) 106 and the sample 118, as well as supplemental information such as the location (e.g., reaction well, plate position, etc.) where the reagent(s) 106 and the sample 118 were placed in a reaction vessel 102 (reaction site 116). Environmental conditions may also factor into the initial conditions and quantities 124, including the temperature 108 and pressure 120 at the start of and during the course of the qPCR reaction, as changes in temperature 108 and pressure 120 may affect volumetric measurements.

When the reaction vessel 102 is provided to the instrument 104 to start the qPCR reaction, the initial conditions and quantities 124 may be entered in or detected by the instrument 104 and reported to an initial reaction condition database 110.

During the reaction, the sample 118 is denatured during a high temperature phase of the reaction, separating the double-stranded DNA into two complementary strands. High-temperature incubation is used to “melt” the double-stranded DNA into single strands and loosen the secondary structure in single-stranded DNA. The highest temperature that the DNA polymerase can withstand is typically used (usually 95°C). The denaturation time can be increased if the template guanine-cytosine (GC) content is high.

An annealing phase follows the denaturing phase. During the annealing phase, complementary sequences have an opportunity to hybridize, so an appropriate temperature is used that is based on the calculated melting temperature (Tm) of the primers (typically this temperature is 5°C below the Tm of the primer). During the annealing phase, the primers and probes anneal to the single-stranded DNA. The primers and probes anneal to specific complementary sequences on either of the single strands. The primers attach to specific sites of the DNA, identifying a start location for the polymerase, while the probes anneal to a site downstream of the primers. The probes may be utilized to identify a marker (e.g., gene, phenotype, microsatellite sequence, SNP) of interest.
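As a hedged illustration of the annealing-temperature guideline above (roughly 5°C below the primer Tm), the following Python sketch estimates Tm with the simple Wallace rule (2°C per A/T base, 4°C per G/C base); real assay design software uses nearest-neighbor thermodynamic models, and the example primer sequence is arbitrary.

```python
# Estimate a primer Tm with the Wallace rule and derive an annealing
# temperature about 5°C below it. Illustrative only.
def primer_tm_wallace(primer: str) -> float:
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2.0 * at + 4.0 * gc

def annealing_temperature(primer: str, offset_c: float = 5.0) -> float:
    return primer_tm_wallace(primer) - offset_c

print(annealing_temperature("ATGCGTACGTTAGC"))   # Tm 42°C -> anneal at 37°C
```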

Following the annealing phase, the reaction undergoes an extension/replication phase where the single strands of DNA are replicated. The extension/replication phase adjusts the temperature to 70–72°C, as this is where the activity of the DNA polymerase is optimal, and primer extension occurs at rates of up to 100 bases per second. When an amplicon in real-time PCR is small, this step is often combined with the annealing step, using 60°C as the temperature. During the extension/replication phase, the primers indicate an attachment point for the polymerase, which extends the single-stranded DNA by adding nucleotides complementary to the template DNA adjacent to the primer nucleotides, forming a complementary sequence and releasing the fluorescent dyes/tags when the probes are cleaved by the polymerase.

During the qPCR reaction, the instrument 104 may detect the fluorescent emissions from the fluorescent probes. The fluorescent emissions may correspond to the intensity of emitted light (fluorescence) as a function of the wavelength of the emitted light, used to identify specific probes. The instrument 104 records these emissions, or the lack thereof, as the results of the qPCR reaction and records this information in a reaction results database 112.

Identifying optimal reactants and reaction conditions is important in improving the reaction efficiency and, subsequently, the accuracy of real-time (rt) PCR data.

In a perfect scenario, each target copy in a PCR reaction will be copied at each cycle, doubling the number of full-length target molecules: this corresponds to 100% amplification efficiency. Variations in efficiency will be amplified as thermal cycling progresses. Thus, any deviation from 100% efficiency can result in potentially erroneous data.
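The compounding effect of per-cycle efficiency can be made concrete with the standard exponential amplification relationship N = N0·(1 + E)^n, where E = 1.0 corresponds to 100% efficiency; the short Python sketch below uses illustrative numbers only.

```python
# Illustration of how per-cycle efficiency compounds over thermal cycling,
# using N = N0 * (1 + E)**n with E = 1.0 meaning 100% efficiency.
def copies_after_cycles(n0: float, efficiency: float, cycles: int) -> float:
    return n0 * (1.0 + efficiency) ** cycles

perfect = copies_after_cycles(100, 1.00, 30)   # 100% efficiency
reduced = copies_after_cycles(100, 0.90, 30)   # 90% efficiency
print(perfect / reduced)                       # ~4.7x fewer copies at 90%
```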

One way to minimize efficiency bias is to amplify relatively short targets. Amplifying a 100 basepair (bp) region is much more likely to result in complete synthesis in a given cycle than, say, amplifying a 1,200 bp target. For this reason, real-time PCR target lengths are generally 60–200 bp. In addition, shorter amplicons are less affected by variations in template integrity. If nucleic acid samples are slightly degraded and the target sequence is long, upstream and downstream primers will be less likely to find their complementary sequence in the same DNA fragment.

Amplicon GC content and secondary structure can be another cause of data inaccuracy. Less-than-perfect target doubling at each cycle is more likely to occur if secondary structure obstructs the path of the DNA polymerase. Ideally, primers should be designed to anneal with, and to amplify, a region of medium (50%) GC content with no significant GC stretches. For amplifying cDNA, it is best to locate amplicons near the 3ʹ ends of transcripts. If RNA secondary structure prohibits full-length cDNA synthesis in a percentage of the transcripts, these amplicons are less likely to be impacted.

Target specificity is another important factor in data accuracy. When designing real-time PCR primers, check primers to be sure that their binding sites are unique in the genome. This reduces the possibility that the primers could amplify similar sequences elsewhere in the sample genome. Primer design software programs automate the process of screening target sequences against the originating genome and masking homologous areas, thus eliminating primer designs in these locations.

Genomic DNA (gDNA), pseudogenes, and allelic variants need to be taken into account when considering different primer and amplicon designs.

gDNA carryover in an RNA sample may be a concern when measuring gene expression levels. The gDNA may be co-amplified with the target transcripts of interest, resulting in invalid data. gDNA contamination is detected by setting up control reactions that do not contain reverse transcriptase (no-RT controls); if the Ct for the no-RT control is higher than the Ct generated by the most dilute target, it indicates that gDNA is not contributing to signal generation. However, gDNA can compromise the efficiency of the reaction due to competition for reaction components such as dNTPs and primers.

The best method for avoiding gDNA interference in real-time PCR is thoughtful primer (or primer/probe) design, which takes advantage of the introns present in gDNA that are absent in mRNA. Whenever possible, Applied Biosystems™ TaqMan™ Gene Expression Assays are designed so that the TaqMan probe spans an exon-exon boundary. Primer sets for SYBR Green dye–based detection should be designed to anneal in adjacent exons or with one of the primers spanning an exon/exon junction. When upstream and downstream PCR primers anneal within the same exon, they can amplify target from both DNA and RNA. Conversely, when primers anneal in adjacent exons, only cDNA will be amplified in most cases, because the amplicon from gDNA would include intron sequence, resulting in an amplicon that is too long to amplify efficiently in the conditions used for real-time PCR.

Pseudogenes, or silent genes, are other transcript variants to consider when designing primers. These are derivatives of existing genes that have become nonfunctional due to mutations and/or rearrangements in the promoter or gene itself. Primer design software programs can perform BLAST™ searches to avoid pseudogenes and their mRNA products.

Allelic variants are two or more unique forms of a gene that occupy the same chromosomal locus. Transcripts originating from these variants can vary by one or more mutations. Allelic variants should be considered when designing primers, depending on whether one or more variants are being studied. In addition, GC-content differences between variants may alter amplification efficiencies and generate separate peaks on a melt curve, which can be incorrectly diagnosed as off-target amplification. Alternatively spliced variants should also be considered when designing primers.

Specificity, dimerization, and self-folding in primers and probes are another set of conditions that need to be accounted for when considering different primer and amplicon designs.

Primer-dimers are most often caused by an interaction between forward and reverse primers, but can also be the result of forward-forward or reverse-reverse primer annealing, or of a single primer folding upon itself. Primer-dimers are of greater concern in more complex reactions such as multiplex real-time PCR. If the dimerization occurs in a staggered manner, as is often the case, some extension can occur, resulting in products that approach the size of the intended amplicon and become more abundant as cycling progresses. Typically, the lower the amount of target at the start of the PCR, the more likely primer-dimer formation will be. The positive side of this potential problem is that the primer-dimer interaction is usually less favorable than the intended primer-template interaction, and there are many ways to minimize or eliminate this phenomenon.
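
A minimal sketch, in Python, of a screen for 3′-end complementarity between two primers, the interaction most likely to seed staggered primer-dimer extension; the three-base window, the pass/fail rule, and the primer sequences are illustrative assumptions, and dedicated tools such as AutoDimer perform far more thorough analysis.

```python
# Illustrative 3'-end complementarity screen between two primers.
# The window size, decision rule, and sequences are assumptions for demonstration.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def three_prime_dimer_risk(primer_a: str, primer_b: str, window: int = 3) -> bool:
    """True if the last `window` bases of primer_a can pair (antiparallel)
    with the last `window` bases of primer_b, a classic dimer motif."""
    a_end = primer_a[-window:]
    b_end = primer_b[-window:]
    return a_end == reverse_complement(b_end)

fwd = "AGCTGACCTGAGGAGTTCG"   # hypothetical forward primer
rev = "GTCATTCCAGTAGTTCGA"    # hypothetical reverse primer ending in CGA
print(three_prime_dimer_risk(fwd, rev))  # True: ...TCG-3' pairs with 3'-AGC...
```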

The main concern with primer-dimers is that they may cause false-positive results. This is of particular concern with reactions that use DNA-binding dyes such as SYBR Green I dye. Another problem is that the resulting competition for reaction components can contribute to a reaction efficiency outside the desirable range of 90–110%. The last major concern, also related to efficiency, is that the dynamic range of the reaction may shrink, impacting reaction sensitivity. Even if signal is not generated from the primer-dimers themselves (as is the case with TaqMan Assays), efficiency and dynamic range may still be affected.

Several free software programs are available to analyze real-time PCR primer designs and determine if they will be prone to dimerize or fold upon themselves. The AutoDimer software program (authored by P.M. Vallone, National Institute of Standards and Technology, USA) is a bioinformatics tool that can analyze a full list of primers at the same time. This is especially helpful with multiplexing applications. However, while bioinformatics analysis of primer sequences can greatly minimize the risk of dimer formation, it is still necessary to monitor dimerization experimentally.

The traditional method of screening for primer-dimers is gel electrophoresis. Primer-dimers appear as diffuse, smudgy bands near the bottom of the gel. One concern with gel validation is that it is not very sensitive and therefore may be inconclusive. However, gel analysis is useful for validating data obtained from a melting/dissociation curve, which is considered the best method for detecting primer-dimers.

Melting or dissociation curves should be generated following any real-time PCR run that uses DNA-binding dyes for detection. In brief, the instrument ramps from low temperature, in which DNA is double-stranded and fluorescence is high, to high temperature, which denatures DNA and results in lower fluorescence. A sharp decrease in fluorescence will be observed at the Tm for each product generated during the PCR. The melting curve peak obtained for the no-template control can be compared to the peak obtained from the target to determine whether primer-dimers are present in the reaction.
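
The following is a minimal illustrative sketch, in Python, of how a melting peak can be located from fluorescence-versus-temperature readings by taking the negative first derivative (−dF/dT); the synthetic fluorescence trace and the true Tm value are assumptions used only to make the example runnable.

```python
# Locate the melting peak as the maximum of -dF/dT.
# The fluorescence trace below is synthetic, for illustration only.
import math

temps = [65.0 + 0.5 * t for t in range(61)]          # 65.0 to 95.0 C in 0.5 C steps
tm_true = 84.0
# Sigmoidal decrease in fluorescence around the assumed Tm (synthetic model).
fluor = [1.0 / (1.0 + math.exp((t - tm_true) / 0.8)) for t in temps]

# Negative first derivative by central differences.
neg_dfdt = [-(fluor[i + 1] - fluor[i - 1]) / (temps[i + 1] - temps[i - 1])
            for i in range(1, len(temps) - 1)]

peak_index = max(range(len(neg_dfdt)), key=neg_dfdt.__getitem__)
print(f"Apparent Tm: {temps[peak_index + 1]:.1f} C")  # ~84.0 C
```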

Ideally, a single distinct peak should be observed for each reaction containing template, and no peaks should be present in the no-template controls. Smaller, broader peaks at a lower melting temperature than that of the desired amplicon and also appearing in the no-template control reactions are quite often dimers. Again, gel runs of product can often validate the size of the product corresponding to the melting peak.

There are situations in which primer-dimers are present but do not affect the overall accuracy of the real-time PCR assay. A common observation is that primer-dimers are present in the no-template control but do not appear in reactions containing template DNA. This is not surprising, because in the absence of template, primers are much more likely to interact with each other. When template is present, primer-dimer formation is not favored.

As long as the peak seen in the no-template control is absent in the plus-template dissociation curve, primer-dimers are not an issue.

Primer-dimers are part of a broad category of nonspecific PCR products that includes amplicons created when a primer anneals to an unexpected location with an imperfect match. Amplification of nonspecific products is of concern because they can contribute to fluorescence, which in turn artificially shifts the Ct of the reaction. They can influence reaction efficiency through competition for reaction components, resulting in a decreased dynamic range and decreased data accuracy. Nonspecific products are an even greater concern in absolute quantification assays, in which precise copy numbers are reported.

Standard gel electrophoresis is generally the first step in any analysis of real-time PCR specificity. While it can help to identify products that differ in size from a target amplicon, a band may still mask similar-sized amplicons and have limited sensitivity. Due to its accuracy and sensitivity, melting curve analysis provides the most confidence in confirming gel electrophoretic assessment of primer specificity.

While nonspecific amplification should always be eliminated when possible, there are some cases in which the presence of these secondary products is not a major concern. For example, if alternate isoforms or multiple alleles that differ in GC content are knowingly targeted, multiple products are expected.

When designing primers, the following software options may be useful: Applied Biosystems™ Primer Express™ Software, the Invitrogen™ OligoPerfect™ Designer web-based tool, and Invitrogen™ Vector NTI™ Software.

These programs can automatically design primers for specific genes or target sequences using algorithms that incorporate the following guidelines, and they can also perform genome-wide BLAST searches for known sequence homologies. A minimal programmatic sketch of these checks is shown after the list.

• In general, design primers that are 18–28 nucleotides in length

• Avoid stretches of repeated nucleotides

• Aim for 50% GC content, which helps to prevent mismatch stabilization

• Choose primers that have compatible Tm values (within 1°C of each other)

• Avoid sequence complementarity between all primers employed in an assay and within each primer
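
The following is a minimal illustrative sketch, in Python, of the guideline checks listed above. The Wallace (2+4) Tm rule, the ±10% GC tolerance, the repeat-run limit, and the example primer sequences are assumptions chosen only to keep the sketch self-contained; actual design software uses nearest-neighbor thermodynamic models and genome-wide homology screens.

```python
# Minimal sketch of the primer guideline checks listed above.
# Thresholds and example sequences are illustrative assumptions.

def gc_content(seq: str) -> float:
    return 100.0 * sum(base in "GC" for base in seq) / len(seq)

def wallace_tm(seq: str) -> float:
    # Basic 2+4 rule: 2 C per A/T, 4 C per G/C (rough estimate only).
    return 2 * sum(base in "AT" for base in seq) + 4 * sum(base in "GC" for base in seq)

def has_repeat_run(seq: str, max_run: int = 4) -> bool:
    run = 1
    for prev, curr in zip(seq, seq[1:]):
        run = run + 1 if curr == prev else 1
        if run > max_run:
            return True
    return False

def check_primer_pair(fwd: str, rev: str) -> list:
    issues = []
    for name, seq in (("forward", fwd), ("reverse", rev)):
        if not 18 <= len(seq) <= 28:
            issues.append(f"{name} primer length {len(seq)} outside 18-28 nt")
        if abs(gc_content(seq) - 50.0) > 10.0:          # assumed tolerance
            issues.append(f"{name} primer GC content {gc_content(seq):.0f}% far from 50%")
        if has_repeat_run(seq):
            issues.append(f"{name} primer contains a long single-nucleotide run")
    if abs(wallace_tm(fwd) - wallace_tm(rev)) > 1.0:
        issues.append("primer Tm values differ by more than 1 C")
    return issues

# Hypothetical pair; the estimated Tm mismatch is flagged in the output.
print(check_primer_pair("AGCTGACCTGAGGAGTTCG", "GTCATTCCAGTAGTTCGAC"))
```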

These considerations may be important for improving the efficiency of the system, but they may require additional analysis of the initial conditions and quantities 124 in comparison with the reaction results 114 of a plurality of similar reaction sets to identify and predict possible changes that could improve the efficiency of other PCR reactions.

Brief Description:

illustrates a PCR process 200 in accordance with one embodiment.

Detailed Description:

Figure 2 illustrates a PCR process 200 in accordance with one embodiment. Denaturation is the process of separating the two hydrogen-bonded complementary chains of DNA into a pair of single-stranded polynucleotide molecules by heating (e.g., to 94–96°C). Annealing (primer binding) involves lowering the temperature of the mixture (e.g., to 45–60°C) so that the primers can attach themselves to target regions of the single-stranded DNA strands. The primers are oligonucleotides selected to bind to the target regions specifically. Extension is the process of growing new DNA strands using the polymerase as a catalyst to incorporate dNTPs onto the attached primers. The newly formed double strands around the target region are then denatured and the cycle is repeated.
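
The following is a minimal illustrative sketch, in Python, of the three-step cycle described above, represented as a simple protocol table that a thermocycler controller might iterate over; the temperatures, hold times, and cycle count are example values within the ranges given in the text, not a validated protocol.

```python
# Illustrative three-step PCR protocol; values are examples within the
# ranges given in the description, not a validated protocol.

PROTOCOL = [
    ("denaturation", 95.0, 30),   # separate the two DNA strands
    ("annealing",    55.0, 30),   # primers bind their target regions
    ("extension",    72.0, 45),   # polymerase extends from the primers
]

def run_cycles(cycles: int):
    for cycle in range(1, cycles + 1):
        for step, temp_c, hold_s in PROTOCOL:
            # A real controller would ramp the block and hold the temperature here.
            print(f"cycle {cycle:02d}: {step:<12} {temp_c:5.1f} C for {hold_s} s")

run_cycles(cycles=2)   # print the first two cycles of the program
```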

The primers may include probes. A probe comprises a molecule (referred to as a reporter molecule) that gives off a signal under certain conditions. Probes are usually primers with an additional group comprising the reporter molecule. The reporter molecule can be, for example, a molecule that fluoresces when excited by light, an attachment to a colored bead, or a molecule that emits radiation.

The signal given off by a probe can be used to detect the PCR-amplified products, which are referred to as amplicons. Based on the nature of the reporter molecule used, the probe generates radioactive, colorimetric, fluorometric, or chemiluminescent signals. Probes are useful for enabling visualization of the PCR products and for providing specificity by ensuring that the amplicon is the target sequence of interest and not the result of non-specific amplification. In some cases, a simple gel electrophoresis step is sufficient to confirm the presence of specific amplicons.

Brief Description:

illustrates a PCR system 300 in accordance with one embodiment.

Detailed Description:

Figure 3 illustrates a PCR system 300 in accordance with one embodiment. The PCR system 300 comprises an array 302 of reaction sites 304 to which the components of the PCR reaction are added in different combinations. The array 302 is located in a thermocycler 306 to drive the PCR reaction heating cycles. As the reaction progresses, probes comprised by the primers fluoresce, and these emissions are detected by a photodetector 308 and provided in real time to an analysis system 310.
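
The following is a minimal illustrative sketch, in Python, of how an analysis system such as the analysis system 310 might derive a Ct (threshold cycle) value from per-cycle fluorescence readings reported by the photodetector 308; the logistic signal model and the threshold value are assumptions used only to make the example runnable.

```python
# Derive a Ct (threshold cycle) from simulated per-cycle fluorescence.
# The signal model and threshold below are illustrative assumptions.
import math

def simulated_fluorescence(cycles: int, ct_true: float = 24.0) -> list:
    """Logistic amplification curve plus a small constant baseline."""
    return [0.02 + 1.0 / (1.0 + math.exp(-(c - ct_true) / 1.5))
            for c in range(1, cycles + 1)]

def threshold_cycle(signal: list, threshold: float = 0.2) -> float:
    """First cycle (linearly interpolated) at which the signal crosses threshold."""
    for i in range(1, len(signal)):
        if signal[i - 1] < threshold <= signal[i]:
            frac = (threshold - signal[i - 1]) / (signal[i] - signal[i - 1])
            return i + frac  # crossing occurs between cycle i and cycle i + 1
    return float("nan")

signal = simulated_fluorescence(cycles=40)
print(f"Ct ~ {threshold_cycle(signal):.2f}")
```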


Parts List

100

reaction system

102

reaction vessel

104

instrument

106

reagent(s)

108

temperature

110

initial reaction condition database

112

reaction results database

114

results

116

reaction site

118

sample

120

pressure

122

quantities

124

initial conditions and quantities

200

PCR process

202

DNA to replicate

204

nucleotide

206

item

300

PCR system

302

array

304

reaction sites

306

thermocycler

308

photodetector

310

analysis system


Terms/Definitions

RT-PCR

reverse transcription polymerase chain reaction, a variant of polymerase chain reaction (PCR), a technique commonly used to detect RNA expression. RT-PCR is not to be confused with real-time polymerase chain reaction (qPCR). RT-PCR is used to qualitatively detect gene expression through the creation of complementary DNA (cDNA) transcripts from RNA. A common application of PCR is the study of patterns of gene expression. Tissues (or even individual cells) can be analyzed at different stages to see which genes have become active, or which have been switched off. This application can also use quantitative PCR to quantitate the actual levels of expression. qPCR is used to quantitatively measure the amplification of DNA using fluorescent dyes. qPCR is also referred to as quantitative PCR, quantitative real-time PCR, and real-time quantitative PCR. Although RT-PCR and traditional PCR both produce multiple copies of particular DNA isolates through amplification, the applications of the two techniques are fundamentally different. Traditional PCR is used to exponentially amplify target DNA sequences. RT-PCR is used to clone expressed genes by reverse transcribing the RNA of interest into its DNA complement (cDNA) through the use of reverse transcriptase. Subsequently, the newly synthesized cDNA is amplified using traditional PCR.

cDNA

complementary DNA, DNA synthesized from a single stranded RNA (e.g., messenger RNA (mRNA) or microRNA) template in a reaction catalyzed by the enzyme reverse transcriptase.

mPCR

multiplex polymerase chain reaction, the use of polymerase chain reaction to amplify several different DNA sequences simultaneously (as if performing many separate PCR reactions all together in one reaction). This process amplifies DNA in samples using multiple primers and a temperature-mediated DNA polymerase in a thermal cycler. The design of all primer pairs has to be optimized so that all primer pairs can work at the same annealing temperature during PCR. Multiplex PCR consists of multiple primer sets within a single PCR mixture to produce amplicons of varying sizes that are specific to different DNA sequences. By targeting multiple sequences at once, additional information may be gained from a single test run that otherwise would require several times the reagents and more time to perform. Annealing temperatures for each of the primer sets must be optimized to work correctly within a single reaction, and amplicon sizes, i.e., their base pair lengths, should be different enough to form distinct bands when visualized by gel electrophoresis. Alternatively, if amplicon sizes overlap, the different amplicons may be differentiated and visualized using primers that have been dyed with different-color fluorescent dyes.

5′ end

the “five prime end”, the end of the DNA or RNA strand that has the fifth carbon in the sugar-ring of the deoxyribose or ribose at its terminus. A phosphate group attached to the 5′-end permits ligation of two nucleotides, i.e., the covalent binding of a 5′-phosphate to the 3′-hydroxyl group of another nucleotide, to form a phosphodiester bond. Removal of the 5′-phosphate prevents ligation.

Assay

an analytic procedure in molecular biology for qualitatively assessing or quantitatively measuring the presence, amount, or functional activity of a target entity (the analyte).

3′ end

“three prime end” of a DNA or RNA strand, terminating at the hydroxyl group of the third carbon in the sugar-ring, and is known as the tail end. The 3′-hydroxyl is necessary in the synthesis of new nucleic acid molecules as it is ligated (joined) to the 5′-phosphate of a separate nucleotide, allowing the formation of strands of linked nucleotides.

nucleic acid directionality

the end-to-end chemical orientation of a single strand of nucleic acid. In a single strand of DNA or RNA, the chemical convention of naming carbon atoms in the nucleotide sugar-ring means that there will be a 5′-end, which frequently contains a phosphate group attached to the 5′ carbon of the ribose ring, and a 3′-end (usually pronounced “five prime end” and “three prime end”), which typically is unmodified from the ribose -OH substituent. In a DNA double helix, the strands run in opposite directions to permit base pairing between them, which is essential for replication or transcription of the encoded information. The relative positions of structures along a strand of nucleic acid, including genes and various protein binding sites, are usually noted as being either upstream (towards the 5′-end) or downstream (towards the 3′-end). (See also upstream and downstream.)

Reverse transcriptase

reverse transcriptase (RT) is an enzyme used to generate complementary DNA (cDNA) from an RNA template, a process termed reverse transcription.

Genotyping

determining differences in the genetic make-up (genotype) of an individual by examining the individual’s DNA sequence using biological assays and comparing it to another individual’s sequence or a reference sequence. It reveals the alleles an individual has inherited from their parents.

Electrophoresis


Drawings

Brief Description:

illustrates a CE device 100 in accordance with one embodiment.

Detailed Description:

Referring to Figure 1, a CE device 100 in one embodiment comprises a voltage bias source 102, a capillary 104, a body 114, a detector 106, a sample injection port 108, a heater 110, and a separation media 112. A sample is injected into the sample injection port 108, which is maintained at an above-ambient temperature by the heater 110. Once injected, the sample engages the separation media 112 and is separated into its component molecules. The components migrate through the capillary 104 under the influence of an electric field established by the voltage bias source 102, until they reach the detector 106.

Brief Description:

illustrates a CE device 200 in accordance with one embodiment.

Detailed Description:

Referring to Figure 2, a CE device 200 in one embodiment comprises a voltage bias source 202, a capillary 204, a body 214, a detector 206, a sample injection port 208, a heater 210, and a separation media 212. A sample is injected into the sample injection port 208, which is maintained at an above-ambient temperature by the heater 210. Once injected, the sample engages the separation media 212 and is separated into its component molecules. The components migrate through the capillary 204 under the influence of an electric field established by the voltage bias source 202, until they reach the detector 206. The CE device 200 may be a component of an instrument 216 that includes a computational device to collect and process image signals from the detector. The instrument 216 may be a capillary electrophoresis genetic analyzer providing many features similar to those found in exemplary commercial CE instruments.

Brief Description:

illustrates a CE system 300 in accordance with one embodiment.

Detailed Description:

Referencing Figure 3, a CE system 300 in one embodiment comprises a source buffer 316 initially comprising the fluorescently labeled sample 318, a capillary 320, a destination buffer 324, a power supply 326, a computing device 302 comprising a processor 308 and a memory 306 comprising a basecaller algorithm 304, and a controller 310. The source buffer 316 is in fluid communication with the destination buffer 324 by way of the capillary 320. The power supply 326 applies voltage to the source buffer 316 and the destination buffer 324, generating a voltage bias through an anode 328 in the source buffer 316 and a cathode 330 in the destination buffer 324. The voltage applied by the power supply 326 is configured by a controller 310 operated by the computing device 302. The fluorescently labeled sample 318 near the source buffer 316 is pulled through the capillary 320 by the voltage gradient, and optically labeled nucleotides of the DNA fragments within the sample are detected as they pass through an optical sensor 322. Differently sized DNA fragments within the fluorescently labeled sample 318 are pulled through the capillary 320 at different times due to their size. The optical sensor 322 detects the fluorescent labels on the nucleotides as an image signal and communicates the image signal to the computing device 302. The computing device 302 aggregates the image signal as sample data and utilizes the basecaller algorithm 304 stored in the memory 306 to transform the sample data into processed data and generate an electropherogram 314 to be shown on a display device 312.

Brief Description:

illustrates a CE process 400 in accordance with one embodiment.

Detailed Description:

Referencing Figure 4, a CE process 400 involves a computing device 412 communicating a configuration control 416 to a controller 408 to control the voltage applied by a power supply 406 to the buffers 402. After the prepared fluorescently labeled sample has been added to the source buffer, the controller 408 communicates an operation control 418 to the power supply 406 to apply a voltage 420 to the buffers, creating a voltage bias/electrical gradient. The applied voltage causes the fluorescently labeled sample 422 to move through the capillary 404 between the buffers 402 and pass by the optical sensor 410. The optical sensor 410 detects fluorescent labels on the nucleotides of the DNA fragments that pass through the capillary and communicates the image signal 424 to the computing device 412. The computing device 412 aggregates the image signal 424 to generate the sample data for further processing. A basecaller algorithm processes the sample data (e.g., signal values) to generate processed data. The computing device 412 then generates a display control 426 to display an electropherogram of the processed data on a display device 414.
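
The following is a minimal illustrative sketch, in Python, of the aggregation and basecalling step described above: per-dye signal values collected at detected peak positions are converted to base calls by taking the strongest channel. The intensity values, channel-to-base mapping, and purity rule are illustrative assumptions, not the behavior of any particular basecaller algorithm.

```python
# Illustrative basecalling from per-dye peak intensities.
# Values, dye ordering, and the purity threshold are assumptions for demonstration.

DYE_TO_BASE = {0: "G", 1: "A", 2: "T", 3: "C"}   # assumed channel order

# Each row: the four dye intensities measured at one detected peak.
peak_intensities = [
    [1200,   90,  110,   80],
    [  70, 1500,   95,  100],
    [  60,  105, 1100,   90],
    [ 950,  900,  100,   85],   # ambiguous peak: two similar channels
]

def call_base(channels, purity_threshold=1.5):
    """Call the base for one peak; return 'N' if the top channel is not
    clearly stronger than the runner-up."""
    ranked = sorted(range(4), key=lambda i: channels[i], reverse=True)
    top, second = ranked[0], ranked[1]
    if channels[second] and channels[top] / channels[second] < purity_threshold:
        return "N"
    return DYE_TO_BASE[top]

print("".join(call_base(peak) for peak in peak_intensities))  # GATN
```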

Brief Description:

illustrates a CE process 500 in accordance with one embodiment.

Detailed Description:

Referencing Figure 5, a CE process 500 involves configuring the operating parameters of a capillary electrophoresis instrument to sequence at least one fluorescently labeled sample (block 502). The configuration of the instrument may include creating or importing a plate setting for running a series of samples and assigning labels to the plate samples to assist in the processing of the collected imaging data. The process may also include communicating configuration controls to a controller to start applying voltage at a predetermined time. In block 504, the CE process 500 loads the fluorescently labeled sample into the instrument. After the sample is loaded, the instrument may transfer the sample from a plate well into the capillary tube and then position the capillary tube into the starting buffer at the beginning of the capillary electrophoresis process. In block 506, the CE process 500 begins the instrument run after the sample has been loaded into the capillary by applying a voltage to the buffer solutions positioned at opposite ends of the capillary, forming an electrical gradient to transport DNA fragments of the fluorescently labeled sample from the starting buffer to a destination buffer and past an optical sensor. In block 508, the CE process 500 detects the individual fluorescent signals on the nucleotides of the DNA fragments as they move toward the destination buffer past the optical sensor and communicates the image signal to the computing device. In block 510, the CE process 500 aggregates the image signal from the optical sensor at the computing device and generates sample data that corresponds to the fluorescent intensity of the nucleotides of the DNA fragments. In block 512, the CE process 500 processes the sample data to identify the bases called in the DNA fragments at each time point. In block 514, the CE process 500 displays the processed data as an electropherogram on a display device.

Brief Description:

illustrates sequencing data 600 in accordance with one embodiment.

Detailed Description:

Referencing Figure 6, sequencing data 600 shows Sanger sequencing data 602 in an electropherogram. Capillary electrophoresis (CE) technologies can examine several varieties of biopolymer, for example DNA, methylated DNA, mRNA, and proteins tagged with variable-length oligos. The resulting data appears as a series of peaks. Minor variations in a biopolymer appear as smaller peaks, possibly overlapping the peaks corresponding to the dominant form of the biopolymer. The smaller peaks can be confused with peaks that arise from biochemical noise associated with the chemical reactions used to process the biological sample. In the Sanger sequencing data 602, a number of peaks 604 are seen at nucleotide position 116, corresponding to a cytosine. Through existing analysis techniques that involve examining peak characteristics such as peak height and width, and through data manipulation 608, the Sanger sequencing data 602 may be presented as adjusted data 610 showing the peaks 604 as a detected variant 606 for nucleotide position 116.

Current solutions, such as commercially available variant detection software (e.g., Thermo Fisher Minor Variant Finder, Multilocus Variant Analysis, Variant Reporter Software v1.1, and the variant analysis and identification modules in MicrobeBridge Software), recognize the smaller peaks associated with the minor biopolymeric forms present in a sample by examining peak characteristics such as peak height and width.
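
The following is a minimal illustrative sketch, in Python, of the kind of peak-characteristic check such tools apply: at each position, the secondary peak height is compared to the primary peak height, and a position is flagged as a candidate minor variant when the ratio exceeds a noise floor. The 5% floor and the peak heights below are illustrative assumptions, not the behavior of any listed product.

```python
# Flag candidate minor variants by secondary/primary peak-height ratio.
# Heights and the 5% noise floor are illustrative assumptions.

positions = {
    # position: {base: peak height}
    114: {"A": 2100, "C": 40,   "G": 35, "T": 50},
    115: {"A": 60,   "C": 45,   "G": 1980, "T": 30},
    116: {"A": 55,   "C": 1850, "G": 40, "T": 610},   # mixed C/T position
}

def minor_variant_candidates(peaks_by_position, noise_floor=0.05):
    candidates = {}
    for pos, heights in peaks_by_position.items():
        (b1, h1), (b2, h2) = sorted(heights.items(), key=lambda kv: kv[1],
                                    reverse=True)[:2]
        ratio = h2 / h1
        if ratio > noise_floor:
            candidates[pos] = (b1, b2, round(ratio, 2))
    return candidates

print(minor_variant_candidates(positions))   # {116: ('C', 'T', 0.33)}
```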

The Sanger sequencing data 602 and the adjusted data 610, as well as the analysis process utilized to identify the peak characteristics, may be stored within sequencing data storage to be used in the statistical analysis of similar results or applied as inputs to machine learning algorithms to distinguish dominant from minor variants of biomolecules. 


Parts List

100

CE device

102

voltage bias source

104

capillary

106

detector

108

sample injection port

110

heater

112

separation media

114

body

200

CE device

202

voltage bias source

204

capillary

206

detector

208

sample injection port

210

heater

212

separation media

214

body

216

instrument

300

CE system

302

computing device

304

basecaller algorithm

306

memory

308

processor

310

controller

312

display device

314

electropherogram

316

source buffer

318

fluorescently labeled sample

320

capillary

322

optical sensor

324

destination buffer

326

power supply

328

anode

330

cathode

400

CE process

402

buffers

404

capillary

406

power supply

408

controller

410

optical sensor

412

computing device

414

display device

416

configuration control

418

operation control

420

voltage

422

fluorescently labeled sample

424

image signal

426

display control

500

CE process

502

block

504

block

506

block

508

block

510

block

512

block

514

block

600

sequencing data

602

Sanger sequencing data

604

peaks

606

detected variant

608

data manipulation

610

adjusted data


Terms/Definitions

average peak

sample data

the output of a single lane or capillary on a sequencing instrument. Sample data is entered into Sequencing Analysis, SeqScape, and other sequencing analysis software.

plasmid

a genetic structure in a cell that can replicate independently of the chromosomes, typically a small circular DNA strand in the cytoplasm of a bacterium or protozoan. Plasmids are much used in the laboratory manipulation of genes.

polymerase

an enzyme that catalyzes polymerization. DNA and RNA polymerases build single‐stranded DNA or RNA (respectively) from free nucleotides, using another single‐stranded DNA or RNA as the template.

mixed base

One-base positions that contain 2, 3, or 4 bases. These bases are assigned the appropriate IUB code.

noise

average background fluorescent intensity for each dye.

capillary electrophoresis genetic analyzer

instrument that applies an electrical field to a capillary loaded with a sample so that the negatively charged DNA fragments move toward the positive electrode. The speed at which a DNA fragment moves through the medium is inversely proportional to its molecular weight. This process of electrophoresis can separate the extension products by size at a resolution of one base.

raw data

a multicolor graph displaying the fluorescence intensity (signal) collected for each of the four fluorescent dyes.

basecall

assigning a nucleotide base (A, C, G, T, or N) to each peak of the fluorescence signal.

primer

A short single strand of DNA that serves as the priming site for DNA polymerase in a PCR reaction.

amplicon

the product of a PCR reaction. Typically, an amplicon is a short piece of DNA.

variant

bases where the consensus sequence differs from the reference sequence that is provided.

base pair

complementary nucleotide in a DNA sequence. Thymine (T) is complementary to adenine (A) and guanine (G) is complementary to cytosine (C).

3′ end

single nucleotide polymorphism

a variation in a single base pair in a DNA sequence.

quality values

an estimate (or prediction) of the likelihood that a given basecall is in error. Typically, the quality value is scaled following the convention established by the phred program: QV = –10 log10(Pe), where Pe stands for the estimated probability that the call is in error. Quality values are a measure of the certainty of the base-calling and consensus-calling algorithms. Higher values correspond to a lower chance of algorithm error. Sample quality values refer to the per-base quality values for a sample, and consensus quality values are per-consensus quality values.
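
As a small worked example of the phred scaling above (the error probabilities chosen are illustrative):

```python
# Phred-scaled quality value: QV = -10 * log10(Pe).
import math

def quality_value(p_error: float) -> float:
    return -10.0 * math.log10(p_error)

for pe in (0.1, 0.01, 0.001):
    print(f"Pe = {pe}: QV = {quality_value(pe):.0f}")   # 10, 20, 30
```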

width curve

heterozygous insertion deletion variant

see single nucleotide polymorphism

average peak width

Sanger Sequencer

a DNA sequencing process that takes advantage of the ability of DNA polymerase to incorporate 2´,3´-dideoxynucleotides—nucleotide base analogs that lack the 3´-hydroxyl group essential in phosphodiester bond formation. Sanger dideoxy sequencing requires a DNA template, a sequencing primer, DNA polymerase, deoxynucleotides (dNTPs), dideoxynucleotides (ddNTPs), and reaction buffer. Four separate reactions are set up, each containing radioactively labeled nucleotides and either ddA, ddC, ddG, or ddT. The annealing, labeling, and termination steps are performed on separate heat blocks. DNA synthesis is performed at 37°C, the temperature at which DNA polymerase has the optimal enzyme activity. DNA polymerase adds a deoxynucleotide or the corresponding 2´,3´-dideoxynucleotide at each step of chain extension. Whether a deoxynucleotide or a dideoxynucleotide is added depends on the relative concentration of both molecules. When a deoxynucleotide (A, C, G, or T) is added to the 3´ end, chain extension can continue. However, when a dideoxynucleotide (ddA, ddC, ddG, or ddT) is added to the 3´ end, chain extension terminates. Sanger dideoxy sequencing results in the formation of extension products of various lengths terminated with dideoxynucleotides at the 3´ end.

mobility shift

electrophoretic mobility changes imposed by the presence of different fluorescent dye molecules associated with differently labeled reaction extension products.

spacing curve

5′ end

image signal

a number that indicates the intensity of the fluorescence from one of the dyes used to identify bases during a data run. Signal strength numbers are shown in the Annotation view of the sample file.

n-1 peak

pure base

assignment mode for a base caller, in which the base caller assigns an A, C, G, or T to a position instead of a variable (mixed) base.

polymerase slippage

is a form of mutation that leads to either a trinucleotide or dinucleotide expansion or contraction during DNA replication. A slippage event normally occurs when a sequence of repetitive nucleotides (tandem repeats) is found at the site of replication. Tandem repeats are unstable regions of the genome where frequent insertions and deletions of nucleotides can take place.

relative fluorescence unit

a unit of measurement used in analyses that employ fluorescence detection, such as electrophoresis methods for DNA analysis.

base spacing

the number of data points from one peak to the next. A negative spacing value or a spacing value shown in red indicates a problem with your samples, and/or the analysis parameters.

Exemplary commercial CE devices

include the Applied Biosystems, Inc. (ABI) genetic analyzer models 310 (single capillary), 3130 (4 capillary), 3130xL (16 capillary), 3500 (8 capillary), 3500xL (24 capillary), 3730 (48 capillary), and 3730xL (96 capillary), the Agilent 7100 device, Prince Technologies, Inc.’s PrinCE™ Capillary Electrophoresis System, Lumex, Inc.’s Capel-105™ CE system, and Beckman Coulter’s P/ACE™ MDQ systems, among others.

separation or sieving media

include gels; however, non-gel liquid polymers such as linear polyacrylamide, hydroxyalkylcellulose (HEC), agarose, and cellulose acetate can also be used. Other separation media that can be used for capillary electrophoresis include, but are not limited to, water-soluble polymers such as poly(N,N′-dimethylacrylamide) (PDMA), polyethylene glycol (PEG), poly(vinylpyrrolidone) (PVP), polyethylene oxide, polysaccharides, and pluronic polyols; various poly(vinylalcohol) (PVAL)-related polymers; polyether-water mixtures; and lyotropic polymer liquid crystals, among others.

beam search

a heuristic search algorithm that explores a graph by expanding the most promising node in a limited set. Beam search is an optimization of best-first search that reduces its memory requirements. Best-first search is a graph search which orders all partial solutions (states) according to some heuristic. In beam search, however, only a predetermined number of best partial solutions are kept as candidates; it is thus a greedy algorithm. Beam search uses breadth-first search to build its search tree. At each level of the tree, it generates all successors of the states at the current level, sorting them in increasing order of heuristic cost. However, it only stores a predetermined number, β, of best states at each level (called the beam width). Only those states are expanded next. The greater the beam width, the fewer states are pruned. With an infinite beam width, no states are pruned and beam search is identical to breadth-first search. The beam width bounds the memory required to perform the search. Since a goal state could potentially be pruned, beam search sacrifices completeness (the guarantee that an algorithm will terminate with a solution, if one exists). Beam search is not optimal; that is, there is no guarantee that it will find the best solution. In general, beam search returns the first solution found. Beam search for machine translation is a different case: once reaching the configured maximum search depth (i.e., translation length), the algorithm evaluates the solutions found during search at various depths and returns the best one (the one with the highest probability). The beam width can be either fixed or variable. One approach that uses a variable beam width starts with the width at a minimum. If no solution is found, the beam is widened and the procedure is repeated.
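
The following is a minimal illustrative sketch, in Python, of the procedure described above, searching a tree of partial solutions while keeping only the β best candidates at each level; the expand() successor function and the scoring used in the toy problem are hypothetical stand-ins for a real application.

```python
# Minimal beam search over partial solutions. The expand() function and
# scoring below are hypothetical stand-ins for a real problem.

def beam_search(start, expand, score, beam_width, max_depth):
    """Keep only the `beam_width` lowest-cost partial solutions per level."""
    beam = [start]
    for _ in range(max_depth):
        # Generate all successors of the current level ...
        candidates = [child for state in beam for child in expand(state)]
        if not candidates:
            break
        # ... then prune to the beam_width best by heuristic cost.
        beam = sorted(candidates, key=score)[:beam_width]
    return min(beam, key=score)

# Toy problem: build a string of 'a'/'b'; the cost favors strings with more 'a's.
expand = lambda s: [s + "a", s + "b"]
score = lambda s: s.count("b") - s.count("a")
print(beam_search("", expand, score, beam_width=2, max_depth=5))  # 'aaaaa'
```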

DSN and Computing Core


Drawings

Brief Description:

is a schematic block diagram of an embodiment of a dispersed or distributed storage network (DSN) in accordance with the present invention;

Detailed Description:

Figure 1 is a schematic block diagram of an embodiment of a dispersed, or distributed, storage network (DSN) 100 that includes a plurality of computing devices 102, a managing unit 122, an integrity processing unit 114, and a DSN memory 116. The components of the DSN 100 are coupled to a network 112, which may include one or more wireless and/or wireline communication systems; one or more non-public intranet systems and/or public internet systems; and/or one or more local area networks (LAN) and/or wide area networks (WAN).

The DSN memory 116 includes a plurality of storage units 118 that may be located at geographically different sites (e.g., one in Chicago, one in Milwaukee, etc.), at a common site, or a combination thereof. For example, if the DSN memory 116 includes eight storage units 118, each storage unit is located at a different site. As another example, if the DSN memory 116 includes eight storage units 118, all eight storage units are located at the same site. As yet another example, if the DSN memory 116 includes eight storage units 118, a first pair of storage units are at a first common site, a second pair of storage units are at a second common site, a third pair of storage units are at a third common site, and a fourth pair of storage units are at a fourth common site. Note that a DSN memory 116 may include more or fewer than eight storage units 118. Further note that each storage unit 118 includes a computing core (as shown in Figure 2, or components thereof) and a plurality of memory devices for storing dispersed error encoded data.

Each of the computing devices 102, the managing unit 122, and the integrity processing unit 114 includes a computing core 104, which includes network interfaces 108. Computing devices 102 may each be a portable computing device and/or a fixed computing device. A portable computing device may be a social networking device, a gaming device, a cell phone, a smart phone, a digital assistant, a digital music player, a digital video player, a laptop computer, a handheld computer, a tablet, a video game controller, and/or any other portable device that includes a computing core. A fixed computing device may be a computer (PC), a computer server, a cable set-top box, a satellite receiver, a television set, a printer, a fax machine, home entertainment equipment, a video game console, and/or any type of home or office computing equipment. Note that each of the managing unit 122 and the integrity processing unit 114 may be a separate computing device, may be a common computing device, and/or may be integrated into one or more of the computing devices 102 and/or into one or more of the storage units 118.

Each interface 108 includes software and hardware to support one or more communication links via the network 112, indirectly and/or directly. For example, an interface 108 supports a communication link (e.g., wired, wireless, direct, via a LAN, via the network 112, etc.) between computing devices 102. As another example, an interface 108 supports communication links (e.g., a wired connection, a wireless connection, a LAN connection, and/or any other type of connection to/from the network 112) between the computing devices 102 and the DSN memory 116. As yet another example, an interface 108 supports a communication link for each of the managing unit 122 and the integrity processing unit 114 to the network 112.

Computing devices 102 include a dispersed storage (DS) client module 106, which enables the computing device to dispersed storage error encode and decode data (e.g., data 110) as subsequently described. In this example embodiment, one computing device 102 functions as a dispersed storage processing agent for another computing device 102. In this role, the computing device 102 dispersed storage error encodes and decodes data on behalf of the other computing device 102. With the use of dispersed storage error encoding and decoding, the DSN 100 is tolerant of a significant number of storage unit failures (the number of failures is based on parameters of the dispersed storage error encoding function) without loss of data and without the need for redundant or backup copies of the data. Further, the DSN 100 stores data for an indefinite period of time without data loss and in a secure manner (e.g., the system is very resistant to unauthorized attempts at accessing the data).
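
The following is a minimal illustrative sketch, in Python, of the tolerance relationship implied above, under the common assumption that the dispersed storage error encoding function writes n encoded data slices of which any k suffice to reconstruct the data; the parameter names ("pillar width" and "decode threshold") and the example values are assumptions, not values specified by this disclosure.

```python
# Failure tolerance of an assumed k-of-n dispersed storage error encoding scheme.
# Parameter names and values are illustrative assumptions.

def failures_tolerated(pillar_width: int, decode_threshold: int) -> int:
    """Number of storage unit failures survivable without data loss, assuming
    any `decode_threshold` of the `pillar_width` slices can rebuild the data."""
    if not 0 < decode_threshold <= pillar_width:
        raise ValueError("decode threshold must be between 1 and pillar width")
    return pillar_width - decode_threshold

print(failures_tolerated(pillar_width=8, decode_threshold=5))   # 3
```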

In operation, the managing unit 122 performs DS management services. For example, the managing unit 122 establishes distributed data storage parameters (e.g., vault creation, distributed storage parameters, security parameters, billing information, user profile information, etc.) for computing devices 102 individually or as part of a group of user devices. As a specific example, the managing unit 122 coordinates creation of a vault (e.g., a virtual memory block associated with a portion of an overall namespace of the DSN) within the DSN memory 116 for a user device, a group of devices, or for public access, and establishes per-vault dispersed storage (DS) error encoding parameters for a vault. The managing unit 122 facilitates storage of DS error encoding parameters for each vault by updating registry information of the DSN 100, where the registry information may be stored in the DSN memory 116, a computing device 102, the managing unit 122, and/or the integrity processing unit 114.

The managing unit 122 creates and stores user profile information (e.g., an access control list (ACL)) in local memory and/or within memory of the DSN memory 116. The user profile information includes authentication information, permissions, and/or the security parameters. The security parameters may include an encryption/decryption scheme, one or more encryption keys, a key generation scheme, and/or a data encoding/decoding scheme.

The managing unit 122 creates billing information for a particular user, a user group, a vault access, public vault access, etc. For instance, the managing unit 122 tracks the number of times a user accesses a non-public vault and/or public vaults, which can be used to generate per-access billing information. In another instance, the managing unit 122 tracks the amount of data stored and/or retrieved by a user device and/or a user group, which can be used to generate per-data-amount billing information.

As another example, the managing unit 122 performs network operations, network administration, and/or network maintenance. Network operations includes authenticating user data allocation requests (e.g., read and/or write requests), managing creation of vaults, establishing authentication credentials for user devices, adding/deleting components (e.g., user devices, storage units, and/or computing devices with a DS client module 106) to/from the DSN 100, and/or establishing authentication credentials for the storage units 118. Network administration includes monitoring devices and/or units for failures, maintaining vault information, determining device and/or unit activation status, determining device and/or unit loading, and/or determining any other system level operation that affects the performance level of the DSN 100. Network maintenance includes facilitating replacing, upgrading, repairing, and/or expanding a device and/or unit of the DSN 100. 

The integrity processing unit 114 performs rebuilding of 'bad' or missing encoded data slices. At a high level, the integrity processing unit 114 performs rebuilding by periodically attempting to retrieve/list encoded data slices, and/or slice names of the encoded data slices, from the DSN memory 116. Retrieved encoded slices are checked for errors due to data corruption, outdated versions, etc. If a slice includes an error, it is flagged as a 'bad' slice. Encoded data slices that were not received and/or not listed are flagged as missing slices. Bad and/or missing slices are subsequently rebuilt using other retrieved encoded data slices that are deemed to be good slices, to produce rebuilt slices. The rebuilt slices are stored in the DSN memory 116.

Brief Description:

is a schematic block diagram of an embodiment of a computing core in accordance with the present invention; 

Detailed Description:

Figure 2 is a schematic block diagram of an embodiment of a computing core 104 that includes a processing module 208, a memory controller 204, main memory 206, a video graphics processing unit 202, an input/output (IO) controller 210, a peripheral component interconnect (PCI) interface 218, an IO interface module 212, at least one IO device interface module 214, a read only memory (ROM) basic input output system (BIOS) 216, and one or more memory interface modules. The one or more memory interface module(s) includes one or more of a universal serial bus (USB) interface module 220, a host bus adapter (HBA) interface module 222, a network interface module 224, a flash interface module 226, a hard drive interface module 228, and a DSN interface module 230.

The DSN interface module 230 functions to mimic a conventional operating system (OS) file system interface (e.g., network file system (NFS), flash file system (FFS), disk file system (DFS), file transfer protocol (FTP), web-based distributed authoring and versioning (WebDAV), etc.) and/or a block memory interface (e.g., small computer system interface (SCSI), internet small computer system interface (iSCSI), etc.). The DSN interface module 230 and/or the network interface module 224 may function as one or more of the interfaces 108 of Figure 1. Note that the IO device interface module 214 and/or the memory interface modules 220–230 may be collectively or individually referred to as IO ports.


Parts List

100

distributed storage network (DSN)

102

computing device

104

computing core

106

dispersed storage (DS) client module

108

interface

110

data

112

network

114

integrity processing unit

116

DSN memory

118

storage units

120

managing unit

202

video graphics processing unit

204

memory controller

206

main memory

208

processing module

210

io controller

212

IO interface module

214

IO device interface module

216

Read only memory (ROM) basic input output system (BIOS)

218

peripheral component interconnect (PCI) interface

220

universal serial bus (USB) interface module

222

host bus adapter (HBA) interface module

224

network interface module

226

flash interface module

228

hard drive interface module

230

DSN interface module


Terms/Definitions

common computing device

universal serial bus

interface 30

other portable device

access control list

decodes data

DS error encoding parameters

good slices

internet small computer system interface

different site

storage unit

times

amount

system

devices

network

key generation scheme

non-public vault

peripheral component interconnect

encryption/decryption scheme

vault access

part

missing slices

slices

memory devices

further note

encoded data slices

bad and/or missing slices

parameters

stores

operation

WebDAV

peripheral component interconnect (PCI) interface

file transfer protocol

unauthorized attempts

connection

indefinite period

handheld computer

host bus adapter

milwaukee

user profile information

one or more communication links

error

second pair

data corruption

fourth pair

role

computer

digital assistant

IO device interface module

network interface module

data slices

network maintenance

printer

network operations

per-data-amount billing information

home

first common site

wide area networks

devices 12

public internet systems

portable computing device

network interfaces

error encoded data

unit

storage

slice

storage unit failures

`bad` slice

chicago

one or more encryption keys

video graphics processing unit

other type

smart phone

DSN memory

main memory

cable set-top box

backup copies

video game controller

outdated version

data loss

slice names

tablet

computing devices

behalf

DS management services

one or more wireless and/or wire

computer server

memory interface modules

portion

file system interface

rebuilt slices

data

dispersed storage (DS) client module

network administration

social networking device

creates and stores user profile information

television set

type

per-access billing information

wireless connection

dispersed storage processing agent

LAN connection

eight storage units

user

satellite receiver

one or more local area networks

iSCSI

tolerant

vault information

at least one IO device interface module

performance level

FIGS

communication systems

vault creation

fax machine

user device

permissions

one or more memory interface modules

module

universal serial bus (USB) interface module

requests

high level

memory controller

need

public access and establishes

user devices

wired connection

flash file system

local memory

fixed computing device

office computing equipment

computing core

read only memory

input/output

devices and/or units

one or more non-public intranet systems

same site

particular user

example embodiment

secure manner

gaming device

first pair

decode data

storage error encodes

replacing

authentication credentials

embodiment

device and/or unit

managing creation

network file system

storage (DS) error encoding parameters

registry information

loss

managing unit

dispersed storage error encoding function

interface

significant number

fourth common site

information

Read only memory (ROM) basic input output system (BIOS)

small computer system interface

video game console

example

vault

communication links

disk file system

group

time

second common site

separate computing devices

communication link

public vault access

schematic block diagram

memory

failures

digital video player

IO interface module

creation

plurality

integrity

cell phone

security parameters

reference

vaults

yet another example

third common site

geographically different sites

storage units

specific example

authentication information

storage parameters

data encoding/decoding scheme

components

io controller

computing device

home entertainment equipment

overall namespace

number

integrity processing unit

block memory interface

third pair

host bus adapter (HBA) interface module

errors

conventional operating system

user data allocation requests

DSN interface module

web-based distributed authoring and versioning

IO ports

one or more memory interface module(s)

data storage parameters

other system level operation

common site

public vaults

virtual memory block

dispersed storage error encoding and decoding

device and/or unit activation status

dispersed storage error encode

user group

instance

processing module

digital music player

laptop computer

device and/or unit loading

flash interface module

combination

software and hardware

redundant

distributed, storage network (DSN)

hard drive interface module