Organization: Public

Partial Engine View


Drawings

Brief Description:

Figure 1 shows a partial engine view.

Detailed Description:

Figure 1 depicts an example embodiment of a combustion chamber or cylinder of internal combustion engine 100. Internal combustion engine 100 may receive control parameters from a control system including controller 126 and input from a vehicle operator 130 via an input device 132. In this example, input device 132 includes an accelerator pedal and a pedal position sensor 134 for generating a proportional pedal position signal 102. Cylinder 182 of internal combustion engine 100 may include combustion chamber walls 136 with piston 138 positioned therein. Piston 138 may be coupled to crankshaft 140 so that reciprocating motion of the piston is translated into rotational motion of the crankshaft. Crankshaft 140 may be coupled to at least one drive wheel of the passenger vehicle via a transmission system. Further, a starter motor may be coupled to crankshaft 140 via a flywheel to enable a starting operation of internal combustion engine 100.

Cylinder 182 can receive intake air via a series of intake air passage 1 142, intake air passage 2 144, and intake air passage 3 146. Intake air passage 3 146 can communicate with other cylinders of internal combustion engine 100 in addition to cylinder 182. In some embodiments, one or more of the intake passages may include a boosting device such as a turbocharger or a supercharger. For example, Figure 1 shows internal combustion engine 100 configured with a turbocharger including a compressor 174 arranged between intake air passage 1 142 and intake air passage 2 144, and an exhaust turbine 176 arranged along exhaust passage 148. Compressor 174 may be at least partially powered by exhaust turbine 176 via a shaft 180 where the boosting device is configured as a turbocharger. However, in other examples, such as where internal combustion engine 100 is provided with a supercharger, exhaust turbine 176 may be optionally omitted, and compressor 174 may be powered by mechanical input from a motor or the engine. A throttle 188 including a throttle plate 164 may be provided along an intake passage of the engine for varying the flow rate and/or pressure of intake air provided to the engine cylinders. For example, throttle 188 may be disposed downstream of compressor 174 as shown in Figure 1, or alternatively may be provided upstream of compressor 174.

Exhaust passage 148 can receive exhaust gases from other cylinders of internal combustion engine 100 in addition to cylinder 182. Exhaust gas sensor(s) 128 is shown coupled to exhaust passage 148 upstream of emission control device 178. Exhaust gas sensor(s) 128 may be selected from among various suitable sensors for providing an indication of exhaust gas air/fuel ratio, such as a linear oxygen sensor or UEGO (universal or wide-range exhaust gas oxygen) sensor, a two-state oxygen sensor or EGO 196 (as depicted), a HEGO (heated EGO), or a NOx, HC, or CO sensor, for example. Emission control device 178 may be a three-way catalyst (TWC), NOx trap, various other emission control devices, or combinations thereof.

Exhaust temperature may be estimated by one or more temperature sensors (not shown) located in exhaust passage 148. Alternatively, exhaust temperature may be inferred based on engine operating conditions such as speed, load, air-fuel ratio (AFR), spark retard, etc. Further, exhaust temperature may be computed by one or more exhaust gas sensor(s) 128. It may be appreciated that the exhaust gas temperature may alternatively be estimated by any combination of temperature estimation methods listed herein. 
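
The inference of exhaust temperature from operating conditions can be pictured as a calibrated base map indexed by speed and load, with corrections for mixture and spark retard. The following is a minimal sketch under that assumption; the table, bin sizes, gains, and functional form are illustrative placeholders, not values from the source.

```python
def estimate_exhaust_temp(speed_rpm, load_frac, lambda_val, spark_retard_deg,
                          base_temp_table, afr_gain=-150.0, retard_gain=8.0):
    """Illustrative exhaust temperature estimate (deg C) from operating conditions.

    base_temp_table is a hypothetical dict mapping (speed_bin, load_bin) to a base
    temperature, populated offline from calibration data; the gains are placeholders.
    """
    speed_bin = round(speed_rpm / 500) * 500  # quantize speed to 500 RPM table bins
    load_bin = round(load_frac, 1)            # quantize load to 0.1 steps
    base = base_temp_table[(speed_bin, load_bin)]
    # Rich mixtures (lambda < 1) tend to cool the exhaust; spark retard heats it.
    return base + afr_gain * (lambda_val - 1.0) + retard_gain * spark_retard_deg
```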

Each cylinder of internal combustion engine 100 may include one or more intake valves and one or more exhaust valves. For example, cylinder 182 is shown including at least one intake valve 150 and at least one exhaust valve 156 located at an upper region of cylinder 182. In some embodiments, each cylinder of internal combustion engine 100, including cylinder 182, may include at least two intake poppet valves and at least two exhaust poppet valves located at an upper region of the cylinder.

Intake valve 150 may be controlled by controller 126 by cam actuation via cam actuation system 1 152. Similarly, exhaust valve 156 may be controlled by controller 126 via cam actuation system 2 154. Cam actuation system 1 152 and cam actuation system 2 154 may each include one or more cams and may utilize one or more of cam profile switching (CPS), variable cam timing (VCT), variable valve timing (VVT), and/or variable valve lift (VVL) systems that may be operated by controller 126 to vary valve operation. The position of intake valve 150 and exhaust valve 156 may be determined by valve position sensor 1 158 and position sensor 2 160, respectively. In alternative embodiments, the intake and/or exhaust valve may be controlled by electric valve actuation. For example, cylinder 182 may alternatively include an intake valve controlled via electric valve actuation and an exhaust valve controlled via cam actuation including CPS and/or VCT systems. In still other embodiments, the intake and exhaust valves may be controlled by a common valve actuator or actuation system, or a variable valve timing actuator or actuation system.

Cylinder 182 can have a compression ratio, which is the ratio of the cylinder volume when piston 138 is at bottom center to the volume when piston 138 is at top center. Conventionally, the compression ratio is in the range of 9:1 to 10:1. However, in some examples where different fuels are used, the compression ratio may be increased. This may happen, for example, when higher octane fuels or fuels with higher latent enthalpy of vaporization are used. The compression ratio may also be increased if direct injection is used, due to its effect on engine knock.
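
Because the compression ratio is defined as the cylinder volume with piston 138 at bottom center divided by the volume at top center, it follows directly from the swept and clearance volumes. A minimal sketch (the variable names and example volumes are illustrative):

```python
def compression_ratio(swept_volume_cc, clearance_volume_cc):
    """Ratio of the total cylinder volume at bottom center to the volume at top center."""
    return (swept_volume_cc + clearance_volume_cc) / clearance_volume_cc

# Example: 450 cc of swept volume over a 50 cc clearance volume gives 10:1.
print(compression_ratio(450.0, 50.0))  # 10.0
```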

In some embodiments, each cylinder of internal combustion engine 100 may include a spark plug 192 for initiating combustion. Ignition system 190 can provide an ignition spark to cylinder 182 via spark plug 192 in response to spark advance signal 162 from controller 126, under select operating modes. However, in some embodiments, spark plug 192 may be omitted, such as where internal combustion engine 100 may initiate combustion by auto-ignition or by injection of fuel, as may be the case with some diesel engines.

In some embodiments, each cylinder of internal combustion engine 100 may be configured with one or more fuel injectors for providing fuel thereto. As a non-limiting example, cylinder 182 is shown including one fuel injector 166. Fuel injector 166 is shown coupled directly to cylinder 182 for injecting fuel directly therein in proportion to the pulse width of signal FPW 170 received from controller 126 via electronic driver 168. In this manner, fuel injector 166 provides what is known as direct injection (hereafter also referred to as “DI”) of fuel into combustion cylinder 182. While Figure 1 shows fuel injector 166 as a side injector, it may also be located overhead of the piston, such as near the position of spark plug 192. Such a position may improve mixing and combustion when operating the engine with an alcohol-based fuel, due to the lower volatility of some alcohol-based fuels. Alternatively, the injector may be located overhead and near the intake valve to improve mixing. Fuel may be delivered to fuel injector 166 from a high pressure fuel system 104 including fuel tanks, fuel pumps, and a fuel rail. Alternatively, fuel may be delivered by a single stage fuel pump at lower pressure, in which case the timing of the direct fuel injection may be more limited during the compression stroke than if a high pressure fuel system is used. Further, while not shown, the fuel tanks may have a pressure transducer providing a signal to controller 126. It will be appreciated that, in an alternate embodiment, fuel injector 166 may be a port injector providing fuel into the intake port upstream of cylinder 182.

It will also be appreciated that while the depicted embodiment illustrates the engine being operated by injecting fuel via a single direct injector, in alternate embodiments the engine may be operated by using two injectors (for example, a direct injector and a port injector) and varying a relative amount of injection from each injector.

Fuel may be delivered by the injector to the cylinder during a single cycle of the cylinder. Further, the distribution and/or relative amount of fuel delivered from the injector may vary with operating conditions. Furthermore, for a single combustion event, multiple injections of the delivered fuel may be performed per cycle. The multiple injections may be performed during the compression stroke, intake stroke, or any appropriate combination thereof. Also, fuel may be injected during the cycle to adjust the air-to-injected fuel ratio (AFR) of the combustion. For example, fuel may be injected to provide a stoichiometric AFR. An AFR sensor may be included to provide an estimate of the in-cylinder AFR. In one example, the AFR sensor may be an exhaust gas sensor, such as exhaust gas sensor(s) 128. By measuring an amount of residual oxygen (for lean mixtures) or unburned hydrocarbons (for rich mixtures) in the exhaust gas, the sensor may determine the AFR. As such, the AFR may be provided as a lambda (λ) value, that is, as a ratio of actual AFR to stoichiometry for a given mixture. Thus, a lambda of 1.0 indicates a stoichiometric mixture, mixtures richer than stoichiometric may have a lambda value less than 1.0, and mixtures leaner than stoichiometric may have a lambda value greater than 1.0.
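
The lambda value described above is simply the measured AFR normalized by the stoichiometric AFR of the fuel in use; a minimal sketch follows (the default stoichiometric value of 14.7 assumes gasoline and is illustrative):

```python
def lambda_value(measured_afr, stoich_afr=14.7):
    """Normalized air-fuel ratio: 1.0 is stoichiometric, below 1.0 rich, above 1.0 lean."""
    return measured_afr / stoich_afr

print(lambda_value(14.7))  # 1.0, stoichiometric
print(lambda_value(13.2))  # ~0.90, rich mixture
print(lambda_value(16.2))  # ~1.10, lean mixture
```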

As described above, Figure 1 shows only one cylinder of a multi-cylinder engine. As such, each cylinder may similarly include its own set of intake/exhaust valves, fuel injector(s), spark plug, etc.

Fuel tanks in fuel system 104 may hold fuel with different fuel qualities, such as different fuel compositions. These differences may include different alcohol content, different octane, different heats of vaporization, different fuel blends, and/or combinations thereof.

Internal combustion engine 100 may further include knock sensor(s) 194 coupled to each cylinder 182 for identifying abnormal cylinder combustion events. In alternate embodiments, one or more knock sensor(s) 194 may be coupled to selected locations of the engine block. The knock sensor may be an accelerometer on the cylinder block, or an ionization sensor configured in the spark plug of each cylinder. The output of the knock sensor may be combined with the output of a crankshaft acceleration sensor to indicate an abnormal combustion event in the cylinder. In one example, based on the output of knock sensor(s) 194 in one or more defined windows (e.g., crank angle timing windows), abnormal combustion due to one or more of knock and pre-ignition may be addressed. In particular, the severity of a mitigating action applied may be adjusted to address an occurrence of knock and pre-ignition, as well as to reduce the likelihood of further knock or pre-ignition events.

Based on the knock sensor signal, such as a signal timing, amplitude, intensity, frequency, etc., and further based on the crankshaft acceleration signal, the controller may address abnormal cylinder combustion events. For example, the controller may identify and differentiate abnormal combustion due to knock and/or pre-ignition. As an example, pre-ignition may be indicated in response to knock sensor signals that are generated in an earlier window (e.g., before a cylinder spark event), while knock may be indicated in response to knock sensor signals that are generated in a later window (e.g., after the cylinder spark event). Further, pre-ignition may be indicated in response to knock sensor output signals that are larger (e.g., higher than a first threshold) and/or less frequent, while knock may be indicated in response to knock sensor output signals that are smaller (e.g., higher than a second threshold, the second threshold lower than the first threshold) and/or more frequent. Additionally, pre-ignition may be distinguished from knock based on the engine operating conditions at the time of abnormal combustion detection. For example, high knock intensities at low engine speed may be indicative of low speed pre-ignition. In other embodiments, abnormal combustion due to knock and pre-ignition may be distinguished based on the output of the knock sensor in a single defined window. For example, pre-ignition may be indicated based on the output of the knock sensor being above a threshold in an earlier part of the window, while knock is indicated based on the output of the knock sensor being higher than the threshold in a later part of the window. Furthermore, each window may have differing thresholds. For example, a first, higher threshold may be applied in the first (earlier) pre-ignition window, while a second, lower threshold is applied in the second (later) knock window.
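
The windowed differentiation described above can be sketched as a comparison of event timing and intensity against per-window thresholds. This is a simplified illustration; the crank-angle windows, thresholds, and scalar intensity representation are hypothetical, not taken from the source.

```python
def classify_abnormal_combustion(event_crank_angle, intensity, spark_angle,
                                 pre_ignition_threshold=4.0, knock_threshold=2.0):
    """Classify a knock-sensor event relative to the cylinder spark event.

    Large-intensity events in the earlier window (before spark) suggest pre-ignition;
    smaller but above-threshold events in the later window (after spark) suggest knock.
    The thresholds are placeholders, with pre_ignition_threshold > knock_threshold.
    """
    if event_crank_angle < spark_angle and intensity > pre_ignition_threshold:
        return "pre-ignition"
    if event_crank_angle >= spark_angle and intensity > knock_threshold:
        return "knock"
    return "normal"
```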

Mitigating actions taken to address knock may differ from those taken by the controller to address pre-ignition. For example, knock may be addressed using spark retard and EGR, while pre-ignition is addressed using cylinder enrichment, cylinder enleanment, engine load limiting, and/or delivery of cooled external EGR.

As elaborated with reference to FIGS. 2-4, the inventors have recognized that instead of detecting and differentiating abnormal combustion events, and then adjusting a mitigating action based on the nature of the abnormal combustion, mitigating actions may be performed based on an output intensity of the knock sensor in the one or more windows. Specifically, a nature of the mitigating action applied may be selected based on the knock sensor output intensity in the one or more windows, and furthermore, a severity of the mitigating action(s) applied may be increased as the knock sensor output intensity in the defined window increases. The mitigating action may also be adjusted based on the engine speed at which the knock sensor output is detected. For example, knock sensor output generated in the first window may be addressed via cylinder enrichment, while knock sensor output generated in the second window may be addressed via spark timing retard. As another example, the cylinder enrichment may be increased as the knock sensor output intensity in the first window increases, while the spark timing may be retarded further from MBT as the knock sensor output intensity in the second window exceeds a threshold.
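
That strategy can be pictured as selecting the action type by window and scaling its severity with the sensor output intensity. A minimal sketch, with illustrative gains, limits, and threshold:

```python
def select_mitigation(window, intensity, threshold=2.0):
    """Pick a mitigating action from the knock-sensor window and scale its severity.

    First (earlier) window output is met with cylinder enrichment; second (later)
    window output is met with spark retard from MBT. All numbers are placeholders.
    """
    if intensity <= threshold:
        return None  # no abnormal combustion indicated
    if window == "first":
        enrichment = min(0.05 * (intensity - threshold), 0.3)  # added fuel fraction
        return ("cylinder_enrichment", enrichment)
    retard_deg = min(2.0 * (intensity - threshold), 10.0)      # degrees from MBT
    return ("spark_retard", retard_deg)
```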

Returning to Figure 1, controller 126 is shown as a microcomputer, including microprocessor unit 106, input/output ports 108, an electronic storage medium for executable programs and calibration values shown as read-only memory 110 in this particular example, random access memory 112, keep alive memory 114, and a data bus. Controller 126 may receive various signals from sensors coupled to internal combustion engine 100, in addition to those signals previously discussed, including measurement of inducted mass air flow 198 from mass air flow sensor 122; engine coolant temperature 172 from temperature sensor 116 coupled to cooling sleeve 118; a profile ignition pickup signal 200 from hall effect sensor 120 (or other type) coupled to crankshaft 140; throttle position 184 from a throttle position sensor; absolute manifold pressure signal 186 from sensor 124; cylinder AFR from exhaust gas sensor(s) 128; and abnormal combustion from knock sensor(s) 194 and a crankshaft acceleration sensor. Engine speed signal, RPM, may be generated by controller 126 from profile ignition pickup signal 200. Manifold pressure signal 186 from a manifold pressure sensor may be used to provide an indication of vacuum, or pressure, in the intake manifold.
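
The conversion of profile ignition pickup signal 200 into an engine speed signal amounts to a period measurement over pulse timestamps. A minimal sketch, assuming evenly spaced pulses and a hypothetical one-pulse-per-revolution calibration (not the controller's actual algorithm):

```python
def rpm_from_pip(pulse_times_s, pulses_per_rev=1):
    """Estimate engine speed from profile ignition pickup (PIP) pulse timestamps."""
    if len(pulse_times_s) < 2:
        return 0.0
    # Average period between consecutive pulses, in seconds.
    period = (pulse_times_s[-1] - pulse_times_s[0]) / (len(pulse_times_s) - 1)
    return 60.0 / (period * pulses_per_rev)

print(rpm_from_pip([0.00, 0.03, 0.06, 0.09]))  # 0.03 s per pulse -> 2000 RPM
```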

Storage medium read-only memory 110 can be programmed with computer readable data representing instructions executable by microprocessor unit 106 for performing the methods described below as well as other variants that are anticipated but not specifically listed.


Parts List

100

internal combustion engine

102

pedal position signal

104

fuel system

106

microprocessor unit

108

input/output ports

110

read-only memory

112

random access memory

114

keep alive memory

116

temperature sensor

118

cooling sleeve

120

hall effect sensor

122

mass air flow sensor

124

sensor

126

controller

128

exhaust gas sensor(s)

130

vehicle operator

132

input device

134

pedal position sensor

136

combustion chamber walls

138

piston

140

crankshaft

142

intake air passage 1

144

intake air passage 2

146

intake air passage 3

148

exhaust passage

150

intake valve

152

cam actuation system 1

154

cam actuation system 2

156

exhaust valve

158

position sensor 1

160

position sensor 2

162

spark advance signal

164

throttle plate

166

fuel injector

168

electronic driver

170

signal FPW

172

engine coolant temperature

174

compressor

176

exhaust turbine

178

emission control device

180

shaft

182

cylinder

184

throttle position

186

manifold pressure signal

188

throttle

190

ignition system

192

spark plug

194

knock sensor(s)

196

two-state oxygen sensor or EGO

198

inducted mass air flow

200

profile ignition pickup signal


Terms/Definitions

direct injection

second window

residual oxygen

addition

single stage fuel pump

knock sensor(s)

input device

controller

engine speed

motor

spark timing retard

internal combustion engine

lambda value

direct injector

sensor output signals

data

VCT systems

single cycle

crankshaft acceleration sensor

cylinder block

pedal position sensor

signals

intake stroke

low engine speed

measurement

shaft

knock sensor output

heated EGO

cylinder spark event

processor

value

position sensor 2

compressor

starter motor

defined window increases

input/output ports

occurrence

variable cam timing

chamber

engine cylinders

such a position

microprocessor unit

nature

intake and/or exhaust valve

delivered fuel

other variants

timing

signal

fuel tanks

different fuel qualities

side injector

various other emission control devices

select operating modes

distribution

pressure transducer

intake passages

exhaust valve

vaporizations

position

ignition system

given mixture

estimate

different heat

inventors

intake air passage 1

other embodiments

accelerator pedal

air-fuel ratio

combustion chamber or cylinder

passage

control system

case

cylinder

boosting device

vacuum

single direct injector

temperature estimation methods

direct fuel injection

signal FPW

upper region

HEGO

engine operating conditions

mixing and combustion

high knock intensities

spark retard and EGR

non-limiting example

electric valve actuation

lambda

lower volatility

series

variable valve lift

position sensor 1

amount

window

common valve actuator

diesel engines

cylinder AFR

hall effect sensor

random access memory

ignition spark

data bus

variable valve timing actuator

reciprocating motion

signal PIP

combinations

instructions

differing thresholds

output intensity

multi-cylinder engine

different fuels

higher latent enthalpy

intake and exhaust valves

air passage

engine speed signal

piston

further knock

fuel injector

downstream

still other embodiments

lean mixtures

crank angle timing windows

response

intake port upstream

Manifold pressure signal

control parameters

sensor

ionization sensor

combustion chamber walls

indication

manifold pressure sensor

abnormal combustion event

passenger vehicle

throttle

various signals

cooled external EGR

exhaust turbine

vehicle operator

flywheel

NOx trap

multiple injections

CO sensor

intake/exhaust valves

selected locations

fuel rail

volumes

injector

earlier window

exhaust temperature

different alcohol content, different octane

address knock

compression ratio

mitigating action(s)

alternate embodiment

intake passage

read-only memory

spark retard

flow rate

intake air passage 3

stoichiometric AFR

AFR sensor

pulse width

alternative embodiments

FIGS

linear oxygen sensor

systems

likelihood

accelerometer

speed

low speed pre-ignition

example embodiment

depicted embodiment

single defined window

different fuel blends

knock and pre-ignition

variable valve timing

calibration values

electronic driver

intake air passage 2

supercharger

lower pressure

valve position sensors

alcohol-based fuels

second threshold

intake valve

embodiments

examples

computer

other cylinders

engine block

spark advance signal

inducted mass air flow

cam profile

output

abnormal cylinder combustion events

two-state oxygen sensor or EGO

engine coolant temperature

throttle plate

emission control device

cooling sleeve

first window

temperature sensor

vaporization

high pressure fuel system

pressure

methods

mass air flow sensor

system

executable programs

combination

severity

sensor signals

pre-ignition events

abnormal combustion detection

air-to-injected fuel ratio

one fuel injector

first threshold

different fuel compositions

operating conditions

top center

alcohol-based fuel

alternate embodiments

universal or wide-range exhaust gas oxygen

exhaust gas temperature

two injectors

signal timing

compression stroke

various suitable sensors

valve operation

intake air

engine knock

absolute manifold pressure signal

injection

fuel pumps

turbocharger

microcomputer

stoichiometry mixtures

bottom center

relative amount

sensor output

first higher threshold

rotational motion

load

time

other examples

appropriate combination

cylinder enleanment

reference

exhaust gas air/fuel ratio

ratio

particular example

single combustion event

cycle

profile ignition pickup signal

UEGO

cam actuation system 2

lower threshold

other type

exhaust gases

abnormal combustion

throttle position sensor

manner

spark timing

threshold

in-cylinder AFR

actuation system

intake manifold

crankshaft

abnormal combustion events

engine load

transmission system

throttle position

crankshaft acceleration signal

knock sensor output intensity

stoichiometric mixture

catalyst

rich mixtures

mixing

combustion

proportion

unburned hydrocarbons

higher octane fuels or fuels

delivery

mechanical input

range

starting operation

exhaust gas sensor(s)

port injector

example

later window

pedal position signal

keep alive memory

exhaust passage

differences

electronic storage medium

knock sensor signal

combustion cylinder

fuel system

spark plug

auto-ignition

cam actuation system 1

pre-ignition

cylinder enrichment

Computing Device and Computing Environment


Drawings

Brief Description:

Figure 1 illustrates a general-purpose computing device 100 embodiment.

Detailed Description:

With reference to Figure 1, an exemplary system includes a general-purpose computing device 100, including a processing unit (CPU or processor 110) and a system bus 126 that couples various system components, including the system memory 112 such as read only memory (ROM 114) and random access memory (RAM 116), to the processor 110. The general-purpose computing device 100 can include a cache 108 of high speed memory connected directly with, in close proximity to, or integrated as part of the processor 110. The general-purpose computing device 100 copies data from the system memory 112 and/or the storage device 118 to the cache 108 for quick access by the processor 110. In this way, the cache 108 provides a performance boost that avoids processor 110 delays while waiting for data. These and other modules can control or be configured to control the processor 110 to perform various actions. Other system memory 112 may be available for use as well. The system memory 112 can include multiple different types of memory with different performance characteristics. It can be appreciated that the disclosure may operate on a general-purpose computing device 100 with more than one processor 110 or on a group or cluster of computing devices networked together to provide greater processing capability. The processor 110 can include any general purpose processor and a hardware module or software module, such as module 1 (mod1 120), module 2 (mod2 122), and module 3 (mod3 124) stored in storage device 118, configured to control the processor 110, as well as a special-purpose processor where software instructions are incorporated into the actual processor design. The processor 110 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric.

The system bus 126 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. A basic input/output system (BIOS) stored in ROM 114 or the like may provide the basic routine that helps to transfer information between elements within the general-purpose computing device 100, such as during start-up. The general-purpose computing device 100 further includes a storage device 118 such as a hard disk drive, a magnetic disk drive, an optical disk drive, tape drive or the like. The storage device 118 can include software modules mod1 120, mod2 122, and mod3 124 for controlling the processor 110. Other hardware or software modules are contemplated. The storage device 118 is connected to the system bus 126 by a drive interface. The drives and the associated computer readable storage media provide nonvolatile storage of computer readable instructions, data structures, program modules, and other data for the general-purpose computing device 100. In one aspect, a hardware module that performs a particular function includes the software component stored in a non-transitory computer-readable medium in connection with the necessary hardware components, such as the processor 110, system bus 126, output device 104, and so forth, to carry out the function. The basic components are known to those of skill in the art and appropriate variations are contemplated depending on the type of device, such as whether the general-purpose computing device 100 is a small, handheld computing device, a desktop computer, or a computer server.

Although the exemplary embodiment described herein employs a storage device 118, it should be appreciated by those skilled in the art that other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, digital versatile disks, cartridges, random access memories (RAM 116), read only memory (ROM 114), a cable or wireless signal containing a bit stream, and the like, may also be used in the exemplary operating environment. Non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.

To enable user interaction with the general-purpose computing device 100, an input device 102 represents any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. An output device 104 can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems enable a user to provide multiple types of input to communicate with the general-purpose computing device 100. The communications interface 106 generally governs and manages the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed. 

For clarity of explanation, the illustrative system embodiment is presented as including individual functional blocks, including functional blocks labeled as a “processor” or processor 110. The functions these blocks represent may be provided through the use of either shared or dedicated hardware, including, but not limited to, hardware capable of executing software and hardware, such as a processor 110, that is purpose-built to operate as an equivalent to software executing on a general purpose processor. For example, the functions of one or more processors presented in Figure 1 may be provided by a single shared processor or multiple processors. (Use of the term “processor” should not be construed to refer exclusively to hardware capable of executing software.) Illustrative embodiments may include microprocessor and/or digital signal processor (DSP) hardware, read-only memory (ROM 114) for storing software performing the operations discussed below, and random access memory (RAM 116) for storing results. Very large scale integration (VLSI) hardware embodiments, as well as custom VLSI circuitry in combination with a general purpose DSP circuit, may also be provided.

The logical operations of the various embodiments are implemented as: (1) a sequence of computer implemented steps, operations, or procedures running on a programmable circuit within a general use computer, (2) a sequence of computer implemented steps, operations, or procedures running on a specific-use programmable circuit; and/or (3) interconnected machine modules or program engines within the programmable circuits. The general-purpose computing device 100 shown in Figure 1 can practice all or part of the recited methods, can be a part of the recited systems, and/or can operate according to instructions in the recited non-transitory computer-readable storage media. Such logical operations can be implemented as modules configured to control the processor 110 to perform particular functions according to the programming of the module. For example, Figure 1 illustrates three modules, mod1 120, mod2 122, and mod3 124, which are modules configured to control the processor 110. These modules may be stored on the storage device 118 and loaded into RAM 116 or system memory 112 at runtime, or may be stored as would be known in the art in other computer-readable memory locations.

Brief Description:

Figure 2 illustrates a computing environment 200 in accordance with one embodiment.

Detailed Description:

Having disclosed some components of a computing system, the disclosure now turns to Figure 2, which illustrates a general purpose mobile computing environment 200. A communication network 206 connects the devices and applications hosted in the computing environment 200. In this computing environment 200, different devices may communicate with and send commands to each other in various ways. The application server 204, for example, may function as an intermediary between two or more user devices, such as user station 202, mobile device(s) a 214, and mobile device(s) b 210. The application server 204 may pass messages sent from one user device to another. For example, the application server 204 may receive a request from mobile device(s) a 214 (the “requesting device”) to locate another device, mobile device(s) b 210 (the “requested device”). In response to such a request (preferably after appropriate authentication and authorization steps have been taken to ensure the request is authorized by the user of the requested device), the application server 204 may send a request to the requested device, mobile device(s) b 210, and receive a response containing information relating to its location. The requested device, mobile device(s) b 210, may have obtained this location information based on signals it received from, for example, GPS satellites 216. Having received a response, the application server 204 may then send the information to the requesting mobile device(s) a 214. Alternatively, the application server 204 does not send a request to the requested device because it has recent location information relating to mobile device(s) b 210 cached. In such an embodiment, the application server 204 may respond to a request by sending cached location information to the requesting mobile device(s) a 214 without communicating with the requested device, mobile device(s) b 210.
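
The relay-with-cache flow described above can be sketched as follows. The class, method names, and single authorization check are hypothetical stand-ins for the authentication, authorization, and device communication steps.

```python
import time

class ApplicationServer:
    """Illustrative intermediary that relays locate requests between user devices."""

    def __init__(self, cache_ttl_s=60.0):
        self.location_cache = {}   # device_id -> (timestamp, location)
        self.cache_ttl_s = cache_ttl_s

    def handle_locate_request(self, requester_id, requested_device):
        if not self.is_authorized(requester_id, requested_device.device_id):
            raise PermissionError("not authorized by the requested device's user")
        cached = self.location_cache.get(requested_device.device_id)
        if cached and time.time() - cached[0] < self.cache_ttl_s:
            return cached[1]  # recent cached location; skip contacting the device
        location = requested_device.report_location()  # e.g., derived from GPS signals
        self.location_cache[requested_device.device_id] = (time.time(), location)
        return location

    def is_authorized(self, requester_id, requested_id):
        return True  # placeholder for real authentication/authorization steps
```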

The devices (user station 202, mobile device(s) a 214, and mobile device(s) b 210) preferably have one or more location aware applications that may run on them. Of these applications, some may have the functionality to send requests to other user devices to enable a requesting user to locate a friend’s device. Upon receiving authorization to locate, a requesting device may then be able to send location requests to requested devices and receive responses containing the location of the requested device. Authorization is preferably managed at the server level, but may also be managed at the device level in addition or as an alternative.

Referring back to Figure 2, the communication network 206 can be any type of network, including a local area network (“LAN”), such as an intranet, a wide area network (“WAN”), such as the internet, or any combination thereof. Further, the communication network 206 can be a public network, a private network, or a combination thereof. The communication network can also be implemented using any type or types of physical media, including wired communication paths and wireless communication paths associated with one or more service providers. Additionally, the communication network 206 can be configured to support the transmission of messages formatted using a variety of protocols.

A device such as a user station 202 may also be configured to operate in the computing environment 200. The user station 202 can be any general-purpose computing device that can be configured to communicate with a web-enabled application, such as through a web browser. For example, the user station 202 can be a personal computing device such as a desktop or workstation, or a portable computing device, such as a laptop, a smart phone, or a post-PC device. The user station 202 can include some or all of the features, components, and peripherals of general-purpose computing device 100 of Figure 1.

User station 202 can further include a network connection to the communication network 206. The network connection can be implemented through a wired or wireless interface, and can support bi-directional communication between the user station 202 and one or more other computing devices over the communication network 206. Also, the user station 202 may include an interface application, such as a web browser or custom application, for communicating with a web-enabled application.

An application server 204 can also be configured to operate in the computing environment 200. The application server 204 can be any computing device that can be configured to host one or more applications. For example, the application server 204 can be a server, a workstation, or a personal computer. In some implementations, the application server 204 can be configured as a collection of computing devices, e.g., servers, sited in one or more locations. The application server 204 can include some or all of the features, components, and peripherals of general-purpose computing device 100 of Figure 1.

The application server 204 can also include a network connection to the communication network 206. The network connection can be implemented through a wired or wireless interface, and can support bi-directional communication between the application server 204 and one or more other computing devices over the communication network 206. Further, the application server 204 can be configured to host one or more applications. For example, the application server 204 can be configured to host a remote management application that facilitates communication with one or more mobile devices connected with the communication network 206. The mobile device(s) a 214, mobile device(s) b 210 and the application server 204 can operate within a remote management framework to execute remote management functions. The application server 204 can be configured to host a notification service application configured to support bi-directional communication over the communication network 206 between multiple communication devices included in the computing environment 200. For example, the notification service application can permit a variety of messages to be transmitted and received by multiple computing devices.

In some implementations, the notification service can include a defined namespace, in which a unique command collection topic can be created for each subscribing mobile device. A unique identifier can be used to associate a subscribing mobile device with the corresponding command collection topic, such as an assigned number or address. The unique identifier also can be embedded in a Uniform Resource Identifier (URI) that is associated with a subscribed command collection topic. Further, one or more command nodes can be created below a command collection topic, such that each command node corresponds to a particular remote command type. For example, a command collection topic can include a separate command node for a location command.
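
Such a namespace might be realized by deriving each device's command collection topic, and the command nodes beneath it, from the device's unique identifier. A minimal sketch; the URI scheme and identifiers are hypothetical:

```python
def command_node_uri(base, device_uid, command_type):
    """Build a URI for a command node under a device's command collection topic.

    base is a hypothetical notification-service root (e.g., "notify://service"),
    device_uid the unique identifier tied to the subscribing device, and
    command_type a remote command type such as "locate" or "message".
    """
    return f"{base}/commands/{device_uid}/{command_type}"

print(command_node_uri("notify://service", "device-1234", "locate"))
# notify://service/commands/device-1234/locate
```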

Through the use of separate command nodes, multiple commands can be transmitted to one or more mobile devices substantially simultaneously. In some implementations, if multiple commands are received in a command collection topic, server time stamps can be compared to determine an order of execution.
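
Determining the order of execution from server time stamps is a simple sort; a minimal sketch, assuming each received command carries a hypothetical server-assigned 'server_ts' field:

```python
def execution_order(commands):
    """Return received commands sorted by server time stamp, earliest first."""
    return sorted(commands, key=lambda c: c["server_ts"])
```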

Through the notification service, a publisher, such as a remote management application, can publish a remote command message to a command collection topic that is associated with a particular mobile device. When a remote command message is published to the command collection topic, a notification message can be transmitted to the one or more subscribing mobile devices. The mobile device can then access the subscribed topic and retrieve one or more published messages. This communication between the publisher and the mobile device can be decoupled. Further, the remote command message can be published to the appropriate command node of the command collection topic. Additionally, a mobile device receiving a remote command message can publish a response to a result topic hosted by a notification service. A publisher, such as a remote management application, can subscribe to the result topic and can receive any published response messages.
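
The decoupled publish, notify, retrieve, and respond cycle can be sketched with an in-memory topic store. The API below is a hypothetical stand-in for the notification service, not a real library:

```python
class NotificationService:
    """Illustrative topic store: publishers and devices never communicate directly."""

    def __init__(self):
        self.topics = {}       # topic name -> queued messages
        self.subscribers = {}  # topic name -> notification callbacks

    def subscribe(self, topic, on_notify):
        self.subscribers.setdefault(topic, []).append(on_notify)

    def publish(self, topic, message):
        self.topics.setdefault(topic, []).append(message)
        for notify in self.subscribers.get(topic, []):
            notify(topic)  # push only a notification; the message stays queued

    def retrieve(self, topic):
        return self.topics.pop(topic, [])

# A remote management application publishes a locate command...
service = NotificationService()
service.subscribe("commands/device-1234/locate", lambda t: print("notified on", t))
service.publish("commands/device-1234/locate", {"cmd": "locate", "server_ts": 1})
# ...and the device retrieves it and publishes its response to a result topic.
for msg in service.retrieve("commands/device-1234/locate"):
    service.publish("results/device-1234", {"location": (37.33, -122.03)})
```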

Further, the computing environment 200 can include one or more mobile devices, such as mobile device(s) a 214 and mobile device(s) b 210. These mobile devices are preferably smart phones such as an Apple iPhone® or post-PC devices such as an Apple iPad®. Each of the mobile devices included in the computing environment 200 can include a network interface configured to establish a connection to the communication network 206. For example, mobile device(s) a 214 can establish a cellular (e.g., GSM, EDGE, 3G, or 4G) network connection that provides data access to the communication network 206. Such a connection may be facilitated by one or more cellular towers 208 located within the range of the mobile device(s) a 214 and mobile device(s) b 210 and connected to the communication network 206. Further, mobile device(s) b 210 can establish an IEEE 802.11 (i.e., WiFi or WLAN) network connection to the communication network 206. Such a connection may be facilitated by one or more wireless network router(s) 212 located within the range of the mobile device(s) a 214 and mobile device(s) b 210 and connected to the communication network 206. Also, either one of these mobile device(s) a 214, mobile device(s) b 210 or an additional device may connect to the communication network 206 through the IEEE 802.16 (i.e., wireless broadband or WiBB) standard. Again, the mobile device(s) a 214, mobile device(s) b 210 may employ the assistance of cellular towers 208 or wireless network router(s) 212 to connect to the communication network 206.

Each of the mobile device(s) a 214 and mobile device(s) b 210 also can be configured to communicate with the notification service application hosted by the application server 204 to publish and receive messages. Further, each of the mobile device(s) a 214 and mobile device(s) b 210 can be configured to execute a remote management application or a remote management function responsive to a remote command received through the notification service application. In some embodiments, the remote management application can be integrated with the operating system of the mobile device.

A mobile device can execute a remote command to perform one or more associated functions. For example, the remote commands can include locate commands, notification commands, and message commands. A message command can be used to present a text-based message on the display of a mobile device. A locate command can be used to cause a mobile device to transmit a message indicating its location at the time the locate command is executed. The locate command may also command the mobile device to use certain resources, such as an embedded GPS system, to determine its location.
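
On the device side, executing a received remote command reduces to dispatching on the command type. A minimal sketch; the device handler names are hypothetical:

```python
def execute_remote_command(device, command):
    """Dispatch a remote command to its handler; handler names are illustrative."""
    if command["cmd"] == "message":
        device.show_message(command["text"])       # present a text-based message
    elif command["cmd"] == "locate":
        location = device.read_gps()               # e.g., embedded GPS resources
        device.send_response({"location": location})
    elif command["cmd"] == "notification":
        device.play_alert()
    else:
        raise ValueError("unknown remote command: " + command["cmd"])
```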

Additionally, each of the mobile device(s) a 214 and mobile device(s) b 210 can include an input interface, through which one or more inputs can be received. For example, the input interface can include one or more of a keyboard, a mouse, a joystick, a trackball, a touch pad, a keypad, a touch screen, a scroll wheel, general and special purpose buttons, a stylus, a video camera, and a microphone. Each of the mobile device(s) a 214 and mobile device(s) b 210 can also include an output interface through which output can be presented, including one or more displays, one or more speakers, and a haptic interface. Further, a location interface, such as a Global Positioning System (GPS) processor, also can be included in one or more of the mobile device(s) a 214 and mobile device(s) b 210 to receive and process signals sent from GPS satellites 216 for obtaining location information, e.g., an indication of current location. In some implementations, general or special purpose processors included in one or more of the mobile device(s) a 214 and mobile device(s) b 210 can be configured to perform location estimation, such as through base station triangulation or through recognizing stationary geographic objects through a video interface.

Having disclosed some basic system components and concepts, the disclosure now turns to exemplary method embodiments 300a and 300b shown in FIGS. 3a and 3b, respectively. For the sake of clarity, the methods are discussed in terms of a general-purpose computing device 100 as shown in Figure 1 configured to practice the methods and operating environment shown in Figure 2. The steps outlined herein are exemplary and can be implemented in any combination thereof, including combinations that exclude, add, or modify certain steps.


Parts List

100

general-purpose computing device

102

input device

104

output device

106

communications interface

108

cache

110

processor

112

system memory

114

ROM

116

RAM

118

storage device

120

mod1

122

mod2

124

mod3

126

system bus

200

computing environment

202

user station

204

application server

206

communication network

208

cellular towers

210

mobile device(s) b

212

wireless network router(s)

214

mobile device(s) a

216

GPS satellites


Terms/Definitions

hardware module

physical media

hard disk drive

joystick

Apple iPad

delays

time

host one or more applications

basic input/output

such a connection

skill

such logical operations

speech

requesting user

haptic interface

response

location information

sake

server

optical disk drive

wired communication paths

portable computing device

small, handheld computing device

mouse

carrier signals

network interface

read only memory (ROM)

various actions

locate command

particular mobile device

nodes

example

software instructions

general-purpose computing device

cellular towers

messages

microphone

mod2

alternative

notification message

reference

module

sequence

connection

server time stamps

procedures

touch screen

output mechanisms

elements

processing unit

protocols

particular hardware arrangement

other system memory

equivalent

particular remote command type

special purpose buttons

improved hardware

remote management function responsive

wired or wireless interface

random access memories (RAMs)

input interface

drives

very large scale integration

keypad

requested device

internet

desktop computer

software and hardware

command collection topic

special-purpose processor

storage devices

other computer-readable memory locations

flash memory cards

such a request

process signals

such an embodiment

drive interface

appropriate variations

GPS satellites

wide area network

microprocessor

unique command collection topic

term “processor”

laptop

quick access

operations

mod1

execution

responses

blocks

number

instances

mobile device(s) a

location requests

devices and applications

memory

random access memory (RAM)

“requesting device”

message command

computing device

computer server

steps

touch pad

bi-directional communication

preferably smart phones

restriction

requested devices

touch-sensitive screen

individual functional blocks

EDGE

recent location information

cartridges

remote command

personal computer

multiple cores or processors

ROM

gesture or graphical input

web browser

tape drive

data access

non-transitory computer-readable medium

public network

necessary hardware components

signal

FIGS

high speed memory

location

result topic

other user devices

Apple iPhone.RTM

base station triangulation

combinations

information

remote management framework

various embodiments

basic routine

interface application

close proximity

published response messages

group or cluster

server level

system memory

computing environment

custom VLSI circuitry

basic features

system

location estimation

display

computer

firmware arrangements

post-pc device

input mechanisms

web browser or custom application

remote commands

network connection

digital versatile disks

special purpose processors

recited systems

other modules

notification service

dedicated hardware

order

cache

digital signal processor

bus structures

intermediary

software component

wireless communication paths

cable

performance boost

multiple different types

multiple types

disclosure

hardware module or software module

“requested device”

more than one processor

subscribed command collection topic

assistance

signals

programmable circuits

wireless broadband or WiBB

programming

message

read-only memory (ROM)

components

results

keyboard

functional blocks

servers

energy

memory controller

separate command nodes

interconnected machine modules

subscribed topic

various system components

type or types

storage media

illustrative system embodiment

remote command message

stylus

non-transitory computer-readable storage media

nonvolatile storage

terms

stationary geographic objects

program engines

general purpose processor

certain steps

other data

magnetic cassettes

authorization

multiple processors

WiFi or WLAN

commands

assigned number or address

Global Positioning System

local area network

bit stream

concepts

indication

wireless network router(s)

data

multiple commands

personal computing device

notification commands

software modules

programmable circuit

part

device level

remote management application

trackball

separate command node

certain resources

locate commands

explanation

general use computer

applications

output

greater processing capability

different devices

web-enabled application

workstation

exclude

range

application server

subscribing mobile device

magnetic disk drive

message commands

program modules

addition

transmission

methods

communication

remote management functions

combination

mod3

BIOS

multimodal systems

communications interface

video interface

requests

appropriate command node

hardware capable

recited methods

location interface

features

additional device

authorization steps

two or more user devices

scroll wheel

defined namespace

intranet

user interaction

smart phone

functions

storage device

embodiments

variety

clarity

other types

embedded GPS system

functionality

“processor” or processor

device

text-based message

computer implemented steps

data structures

motion

different performance characteristics

user station

circuit

media

computing devices

operating system

command node

corresponding command collection topic

peripherals

computing system

desktop or workstation

particular function

its location

single shared processor

multi-core processor

publisher

runtime

memory bus or memory controller

unique identifier

user

Uniform Resource Identifier

software

illustrative embodiments

output interface

network

function

requesting device

instructions

location command

collection

multiple communication devices

appropriate authentication

communication network

particular functions

recited non-transitory computer-readable storage media

video camera

general purpose DSP circuit

various ways

associated computer

electromagnetic waves

private network

request

modules

operating environment

cached location information

current location

implementations

other hardware

method embodiments

basic components

several types

cell tower

system bus

processor

mobile device(s) b

bus architectures

user input and system output

output device

devices

basic system components

peripheral bus

notification service application

input device

multiple computing devices

local bus

start-up

type

Computer System Architecture


Drawings

Brief Description:

Figure 1 depicts an illustrative computer system architecture that may be used in accordance with one or more illustrative aspects described herein.

Detailed Description:

Figure 1 illustrates one example of a system architecture and data processing device that may be used to implement one or more illustrative aspects described herein in a standalone and/or networked environment. Various network nodes (data server 110, web server 106, computer 104, and laptop 102) may be interconnected via a wide area network (WAN) 108, such as the internet. Other networks may also or alternatively be used, including private intranets, corporate networks, LANs, metropolitan area networks (MANs), wireless networks, personal networks (PANs), and the like. Network 108 is for illustration purposes and may be replaced with fewer or additional computer networks. A local area network (LAN) may have one or more of any known LAN topology and may use one or more of a variety of different protocols, such as ethernet. Devices such as data server 110, web server 106, computer 104, laptop 102, and other devices (not shown) may be connected to one or more of the networks via twisted pair wires, coaxial cable, fiber optics, radio waves, or other communication media.

The term “network” as used herein and depicted in the drawings refers not only to systems in which remote storage devices are coupled together via one or more communication paths, but also to stand-alone devices that may be coupled, from time to time, to such systems that have storage capability. Consequently, the term “network” includes not only a “physical network” but also a “content network,” which is comprised of the data, attributable to a single entity, that resides across all physical networks.

The components may include data server 110, web server 106, and client devices such as computer 104 and laptop 102. Data server 110 provides overall access, control, and administration of databases and control software for performing one or more illustrative aspects described herein. Data server 110 may be connected to web server 106, through which users interact with and obtain data as requested. Alternatively, data server 110 may act as a web server itself and be directly connected to the internet. Data server 110 may be connected to web server 106 through the network 108 (e.g., the internet), via direct or indirect connection, or via some other network. Users may interact with the data server 110 using remote computer 104 or laptop 102, e.g., using a web browser to connect to the data server 110 via one or more externally exposed web sites hosted by web server 106. Client computer 104 and laptop 102 may be used in concert with data server 110 to access data stored therein, or may be used for other purposes. For example, from client computer 104, a user may access web server 106 using an internet browser, as is known in the art, or by executing a software application that communicates with web server 106 and/or data server 110 over a computer network (such as the internet).

Servers and applications may be combined on the same physical machines, and retain separate virtual or logical addresses, or may reside on separate physical machines. Figure 1 illustrates just one example of a network architecture that may be used, and those of skill in the art will appreciate that the specific network architecture and data processing devices used may vary, and are secondary to the functionality that they provide, as further described herein. For example, services provided by web server 106 and data server 110 may be combined on a single server.

Each component (data server 110, web server 106, computer 104, laptop 102) may be any type of known computer, server, or data processing device. Data server 110, e.g., may include a processor 112 controlling overall operation of the data server 110. Data server 110 may further include RAM 116, ROM 118, network interface 114, input/output interfaces 120 (e.g., keyboard, mouse, display, printer, etc.), and memory 122. Input/output interfaces 120 may include a variety of interface units and drives for reading, writing, displaying, and/or printing data or files. Memory 122 may further store operating system software 124 for controlling overall operation of the data server 110, control logic 126 for instructing data server 110 to perform aspects described herein, and other application software 128 providing secondary, support, and/or other functionality which may or may not be used in conjunction with aspects described herein. The control logic may also be referred to herein as the data server software (control logic 126). Functionality of the data server software may refer to operations or decisions made automatically based on rules coded into the control logic, made manually by a user providing input into the system, and/or a combination of automatic processing based on user input (e.g., queries, data updates, etc.).

Memory 122 may also store data used in performance of one or more aspects described herein, including a first database 132 and a second database 130. In some embodiments, the first database may include the second database (e.g., as a separate table, report, etc.). That is, the information can be stored in a single database, or separated into different logical, virtual, or physical databases, depending on system design. Web server 106, computer 104, and laptop 102 may have similar or different architecture as described with respect to data server 110. Those of skill in the art will appreciate that the functionality of data server 110 (or web server 106, computer 104, laptop 102) as described herein may be spread across multiple data processing devices, for example, to distribute processing load across multiple computers, to segregate transactions based on geographic location, user access level, quality of service (QoS), etc.

One or more aspects may be embodied in computer-usable or readable data and/or computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices as described herein. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other device. The modules may be written in a source code programming language that is subsequently compiled for execution, or may be written in a scripting language such as (but not limited to) HTML or XML. The computer executable instructions may be stored on a computer readable medium such as a nonvolatile storage device. Any suitable computer readable storage media may be utilized, including hard disks, CD-ROMs, optical storage devices, magnetic storage devices, and/or any combination thereof. In addition, various transmission (non-storage) media representing data or events as described herein may be transferred between a source and a destination in the form of electromagnetic waves traveling through signal-conducting media such as metal wires, optical fibers, and/or wireless transmission media (e.g., air and/or space). Various aspects described herein may be embodied as a method, a data processing system, or a computer program product. Therefore, various functionalities may be embodied in whole or in part in software, firmware, and/or hardware or hardware equivalents such as integrated circuits, field programmable gate arrays (FPGAs), and the like. Particular data structures may be used to more effectively implement one or more aspects described herein, and such data structures are contemplated within the scope of computer executable instructions and computer-usable data described herein.


Parts List

102

laptop

104

computer

106

web server

108

network

110

data server

112

processor

114

network interface

116

RAM

118

ROM

120

input/output interfaces

122

memory

124

operating system software

126

control logic

128

other application software

130

second database

132

first database


Terms/Definitions

modules

databases and control software

nonvolatile storage device

method

servers and applications

server

data server software

user input

time

additional computer networks

second database

physical networks

devices

separate physical machines

hardware

software application

similar or different architecture

networks

various functionalities

overall operation

memory

system design

illustrative computer system architecture

client computers

data processing device

particular tasks

form

electromagnetic waves

radio waves

data processing system

scope

non-storage

routines

aspects

user

information

gate arrays

computer network

aka, remote desktop

firmware and/or hardware or hardware equivalents

users

scripting language

cloud-based environments

rules

source

various network nodes

computer

suitable computer

data server

web server

processor

geographic location

various transmission

remote-access

separate virtual or logical addresses

storage capability

reading

functionality

laptop

illustration purposes

combination

source code programming language

optical fibers

networked environment

one or more communication paths

computer-executable instructions

MANs

hard disks

overall access

input/output interfaces

magnetic storage devices

system architecture and data processing device

indirect connection

internet

instructing data server

internet browser

known LAN topology

specific network architecture

various aspects

ROM

same physical machines

other functionality

CD-ROMs, optical storage devices

conjunction

instructions

remote computers

single entity–which resides

such data structures

network

quality

program modules

physical databases

control logic

computer software

user access level

one or more program modules

LANs

first database

one or more illustrative aspects

one or more aspects

variety

mouse

execution

control and administration

single database

automatic processing

storage media

operations or decisions

particular data structures

device

logic

metropolitan area networks

other purposes

data processing devices

particular abstract data types

skill

client device

components

queries

FPGA

RAM

input

different system environments

secondary, support

writing

PANs

network architecture

network interface

other networks

keyboard

services

operating system software

systems

such systems

addition

programs

just one example

embodiments

processing load

personal networks

corporate networks

separate table

part

component

metal wires

wide area network (WAN)

Computing Architecture

twisted pair wires

example

fiber optics

access data

one or more externally exposed web sites

medium

interface units and drives

single server

printer

multiple computers

drawings

integrated circuits, field

ethernet

other communication media

local area network

web browser

HTML or XML

standalone

other application software

wireless transmission media

other device

computer program product

private intranets

performance

one or more computers

data

software

stand-alone devices

signal-conducting media

objects

destination

computer-usable data

data updates

system

air and/or space

multiple data processing devices

service

data structures

other devices

type

different protocols

remote storage devices

coaxial cable

concert

transactions

Computer Implemented Method Primary Secondary Channels


Drawings

Brief Description:

Figure 1 illustrates one embodiment of a computer-implemented method for selecting between primary and secondary communication channels

Detailed Description:

At 102, a primary P2P communication channel is selected. As mentioned above, the primary channel may be selected based on a predefined prioritization scheme. For example, certain communication channel types may be prioritized ahead of other communication channel types. Channels may also be prioritized based on variables such as bandwidth, cost of usage, and/or reliability. 

At 104, backup P2P communication channels are established. In one embodiment, this is accomplished by sharing connection data between all of the mobile devices over the primary communication channel. At 106, the backup channels are maintained. In one embodiment, this involves transmitting data periodically over the secondary communication channels (e.g., in the form of periodic heartbeat packets). 

At 108, if the primary P2P channel fails (e.g., because the communication link of a particular mobile device went down or the mobile device moved out of range of the communication link), then at 110, the mobile devices promote the highest priority backup channel to the primary channel. In one embodiment, this involves the mobile device with the failed link transmitting a notification of its link failure to the other devices over the secondary channel. Finally, at 112, the backup channel is made the primary channel and the process reverts to 104 (in which any additional backup channels are discovered and added to the prioritization scheme). 
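
For illustration, the selection and failover logic of blocks 102 through 112 reduces to a small amount of bookkeeping. The following is a minimal Python sketch under assumed names (Channel, select_primary, and maintain_backups are invented here and are not part of the disclosure); the sharing of connection data and the heartbeat traffic are elided.

    from dataclasses import dataclass

    @dataclass
    class Channel:
        name: str
        priority: int      # lower value = preferred by the prioritization scheme
        alive: bool = True

    def select_primary(channels):
        """Block 102: pick the highest-priority live channel as the primary."""
        live = [c for c in channels if c.alive]
        return min(live, key=lambda c: c.priority) if live else None

    def maintain_backups(channels, primary):
        """Blocks 104/106: remaining live channels serve as prioritized backups
        (in a real system each would carry periodic heartbeat packets)."""
        return sorted((c for c in channels if c is not primary and c.alive),
                      key=lambda c: c.priority)

    # Blocks 108-112: on primary failure, promote the highest-priority backup.
    channels = [Channel("wifi-direct", 0), Channel("bluetooth", 1), Channel("cellular", 2)]
    primary = select_primary(channels)      # "wifi-direct"
    backups = maintain_backups(channels, primary)
    primary.alive = False                   # the primary link goes down (108)
    primary = select_primary(channels)      # "bluetooth" is promoted (110/112)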


Parts List

100

item

102

block

104

block

106

block

108

decision block

110

block

112

block


Terms/Definitions

other communication channel types

highest priority backup channel

bandwidth

its link failure

usage

primary P2P channel fails

one embodiment

notification

prioritization scheme

primary and secondary communication channels

primary communication channel

other devices

range

certain communication channel types

data

connection data

predefined prioritization scheme

secondary communication channels

additional backup channels

secondary channel

failed link

mobile device

primary P2P communication channel

process

variables

cost

backup P2P communication channels

primary channel

backup channels

mobile devices

form

communication link

reliability

figure

periodic heartbeat packets

backup channel

channels

particular mobile device

computer-implemented method

example

Computer Architecture and Correlithm Object Processing


Drawings

Brief Description:

Figure 1 is a schematic view of an embodiment of a special purpose computer implementing correlithm objects in an n-dimensional space

Detailed Description:

Figure 1 is a schematic view of an embodiment of a user device 112 implementing correlithm objects 104 in an n-dimensional space 102. Examples of user devices 112 include, but are not limited to, desktop computers, mobile phones, tablet computers, laptop computers, or other special purpose computer platforms. The user device 112 is configured to implement or emulate a correlithm object processing system that uses categorical numbers to represent data samples as correlithm objects 104 in a high-dimensional space 102, for example a high-dimensional binary cube. Additional information about the correlithm object processing system is described in Figure 3. Additional information about configuring the user device 112 to implement or emulate a correlithm object processing system is described in Figure 5. 

 

Conventional computers rely on the numerical order of ordinal binary integers representing data to perform various operations such as counting, sorting, indexing, and mathematical calculations. Even when performing operations that involve other number systems (e.g. floating point), conventional computers still resort to using ordinal binary integers to perform any operations. Ordinal based number systems only provide information about the sequence order of the numbers themselves based on their numeric values. Ordinal numbers do not provide any information about any other types of relationships for the data being represented by the numeric values, such as similarity. For example, when a conventional computer uses ordinal numbers to represent data samples (e.g. images or audio signals), different data samples are represented by different numeric values. The different numeric values do not provide any information about how similar or dissimilar one data sample is from another. In other words, conventional computers are only able to make binary comparisons of data samples, which only results in determining whether the data samples match or do not match. Unless there is an exact match in ordinal number values, conventional systems are unable to tell if a data sample matches or is similar to any other data samples. As a result, conventional computers are unable to use ordinal numbers by themselves for determining similarity between different data samples, and instead these computers rely on complex signal processing techniques. Determining whether a data sample matches or is similar to other data samples is not a trivial task and poses several technical challenges for conventional computers. These technical challenges result in complex processes that consume processing power, which reduces the speed and performance of the system. 

 

In contrast to conventional systems, the user device 112 operates as a special purpose machine for implementing or emulating a correlithm object processing system. Implementing or emulating a correlithm object processing system improves the operation of the user device 112 by enabling the user device 112 to perform non-binary comparisons (i.e. comparisons beyond a simple match or no match) between different data samples. This enables the user device 112 to quantify a degree of similarity between different data samples. This increases the flexibility of the user device 112 to work with data samples having different data types and/or formats, and also increases the speed and performance of the user device 112 when performing operations using data samples. These improvements and other benefits to the user device 112 are described in more detail below and throughout the disclosure. 

 

For example, the user device 112 employs the correlithm object processing system to allow the user device 112 to compare data samples even when the input data sample does not exactly match any known or previously stored input values. Implementing a correlithm object processing system fundamentally changes the user device 112 and the traditional data processing paradigm. Implementing the correlithm object processing system improves the operation of the user device 112 by enabling the user device 112 to perform non-binary comparisons of data samples. In other words, the user device 112 is able to determine how similar the data samples are to each other even when the data samples are not exact matches. In addition, the user device 112 is able to quantify how similar data samples are to one another. The ability to determine how similar data samples are to each other is unique and distinct from conventional computers, which can only perform binary comparisons to identify exact matches. 

 

The user device’s 112 ability to perform non-binary comparisons of data samples also fundamentally changes traditional data searching paradigms. For example, conventional search engines rely on finding exact matches or exact partial matches of search tokens to identify related data samples. For instance, conventional text-based search engines are limited to finding related data samples that have text that exactly matches other data samples. These search engines only provide a binary result that identifies whether or not an exact match was found based on the search token. Implementing the correlithm object processing system improves the operation of the user device 112 by enabling the user device 112 to identify related data samples based on how similar the search token is to other data samples. These improvements result in increased flexibility and faster search times when using a correlithm object processing system. The ability to identify similarities between data samples expands the capabilities of a search engine to include data samples that may not have an exact match with a search token but are still related and similar in some aspects. The user device 112 is also able to quantify how similar data samples are to each other based on characteristics besides exact matches to the search token. Implementing the correlithm object processing system involves operating the user device 112 in an unconventional manner to achieve these technological improvements as well as other benefits described below for the user device 112. 

 

Computing devices typically rely on the ability to compare data sets (e.g. data samples) to one another for processing. For example, in security or authentication applications, a computing device is configured to compare an input of an unknown person to a data set of known people (or biometric information associated with these people). The problems associated with comparing data sets and identifying matches based on the comparison are problems necessarily rooted in computer technologies. As described above, conventional systems are limited to a binary comparison that can only determine whether an exact match is found. As an example, an input data sample that is an image of a person may have different lighting conditions than previously stored images. In this example, different lighting conditions can make images of the same person appear different from each other. Conventional computers are unable to distinguish between two images of the same person with different lighting conditions and two images of two different people without complicated signal processing. In both of these cases, conventional computers can only determine that the images are different. This is because conventional computers rely on manipulating ordinal numbers for processing. 

 

In contrast, the user device 112 employs an unconventional configuration that uses correlithm objects to represent data samples. Using correlithm objects to represent data samples fundamentally changes the operation of the user device 112 and how the device views data samples. By implementing a correlithm object processing system, the user device 112 can determine the distance between the data samples and other known data samples to determine whether the input data sample matches or is similar to the other known data samples, as explained in detail below. Unlike the conventional computers described in the previous example, the user device 112 is able to distinguish between two images of the same person with different lighting conditions and two images of two different people by using correlithm objects 104. Correlithm objects allow the user device 112 to determine whether there are any similarities between data samples, such as between two images that are different from each other in some respects but similar in other respects. For example, the user device 112 is able to determine that despite different lighting conditions, the same person is present in both images. 

 

In addition, the user device 112 is able to determine a degree of similarity that quantifies how similar different data samples are to one another. Implementing a correlithm object processing system in the user device 112 improves the operation of the user device 112 when comparing data sets and identifying matches by allowing the user device 112 to perform non-binary comparisons between data sets and to quantify the similarity between different data samples. In addition, using a correlithm object processing system results in increased flexibility and faster search times when comparing data samples or data sets. Thus, implementing a correlithm object processing system in the user device 112 provides a technical solution to a problem necessarily rooted in computer technologies. 

 

The ability to implement a correlithm object processing system provides a technical advantage by allowing the system to identify and compare data samples regardless of whether an exact match has been previously observed or stored. In other words, using the correlithm object processing system, the user device 112 is able to identify similar data samples to an input data sample in the absence of an exact match. This functionality is unique and distinct from conventional computers, which can only identify data samples with exact matches. 

 

Examples of data samples include, but are not limited to, images, files, text, audio signals, biometric signals, electric signals, or any other suitable type of data. A correlithm object 104 is a point in the n-dimensional space 102, sometimes called an “n-space.” The value of n represents the number of dimensions of the space. For example, an n-dimensional space 102 may be a 3-dimensional space, a 50-dimensional space, a 112-dimensional space, or any other suitable dimension space. The number of dimensions depends on the space’s ability to support certain statistical tests, such as the distances between pairs of randomly chosen points in the space approximating a normal distribution. In some embodiments, increasing the number of dimensions in the n-dimensional space 102 modifies the statistical properties of the system to provide improved results. Increasing the number of dimensions increases the probability that a correlithm object 104 is similar to other adjacent correlithm objects 104. In other words, increasing the number of dimensions increases the correlation between how close a pair of correlithm objects 104 are to each other and how similar the correlithm objects 104 are to each other. 
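
As a rough illustration of the statistical test mentioned above, one can sample pairs of random points in an n-dimensional binary space and observe that their Hamming distances approximate a normal distribution centered at n/2 (a binomial distribution with p = 1/2). The following Python sketch is illustrative only and is not part of the disclosure:

    import random

    n = 64                       # dimensions of the n-dimensional space
    samples = 10_000

    distances = []
    for _ in range(samples):
        a = random.getrandbits(n)                 # random point in the binary cube
        b = random.getrandbits(n)
        distances.append(bin(a ^ b).count("1"))   # Hamming distance via XOR

    mean = sum(distances) / samples
    var = sum((d - mean) ** 2 for d in distances) / samples
    # Expected: mean near n/2 = 32, standard deviation near sqrt(n/4) = 4
    print(round(mean, 1), round(var ** 0.5, 1))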

 

Correlithm object processing systems use new types of data structures called correlithm objects 104 that improve the way a device operates, for example, by enabling the device to perform non-binary data set comparisons and to quantify the similarity between different data samples. Correlithm objects 104 are data structures designed to improve the way a device stores, retrieves, and compares data samples in memory. Unlike conventional data structures, correlithm objects 104 are data structures where objects can be expressed in a high-dimensional space such that distances 106 between points in the space represent the similarity between different objects or data samples. In other words, the distance 106 between a pair of correlithm objects 104 in the n-dimensional space 102 indicates how similar the correlithm objects 104, and the data samples they represent, are to each other. Correlithm objects 104 that are close to each other are more similar to each other than correlithm objects 104 that are further apart from each other. For example, in a facial recognition application, correlithm objects 104 used to represent images of different types of glasses may be relatively close to each other compared to correlithm objects 104 used to represent images of other features such as facial hair. An exact match between two data samples occurs when their corresponding correlithm objects 104 are the same or have no distance between them. When two data samples are not exact matches but are similar, the distance between their correlithm objects 104 can be used to indicate their similarities. In other words, the distance 106 between correlithm objects 104 can be used to identify both data samples that exactly match each other as well as data samples that do not match but are similar. This feature is unique to a correlithm processing system and is unlike conventional computers, which are unable to detect when data samples are different but similar in some aspects. 

 

Correlithm objects 104 also provide a data structure that is independent of the data type and format of the data samples they represent. Correlithm objects 104 allow data samples to be directly compared regardless of their original data type and/or format. In some instances, comparing data samples as correlithm objects 104 is computationally more efficient and faster than comparing data samples in their original format. For example, comparing images using conventional data structures involves significant amounts of image processing which is time consuming and consumes processing resources. Thus, using correlithm objects 104 to represent data samples provides increased flexibility and improved performance compared to using other conventional data structures. 

 

In one embodiment, correlithm objects 104 may be represented using categorical binary strings. The number of bits used to represent the correlithm object 104 corresponds with the number of dimensions of the n-dimensional space 102 where the correlithm object 104 is located. For example, each correlithm object 104 may be uniquely identified using a 64-bit string in a 64-dimensional space 102. As another example, each correlithm object 104 may be uniquely identified using a 10-bit string in a 10-dimensional space 102. In other examples, correlithm objects 104 can be identified using any other suitable number of bits in a string that corresponds with the number of dimensions in the n-dimensional space 102. 

 

In this configuration, the distance 106 between two correlithm objects 104 can be determined based on the differences between the bits of the two correlithm objects 104. In other words, the distance 106 between two correlithm objects can be determined based on how many individual bits differ between the correlithm objects 104. The distance 106 between two correlithm objects 104 can be computed using Hamming distance or any other suitable technique. 

 

As an example using a 10-dimensional space 102, a first correlithm object 104 is represented by a first 10-bit string (1001011011) and a second correlithm object 104 is represented by a second 10-bit string (1000011011). The Hamming distance corresponds with the number of bits that differ between the first correlithm object 104 and the second correlithm object 104. In other words, the Hamming distance between the first correlithm object 104 and the second correlithm object 104 can be computed as follows:

 

1001011011 XOR 1000011011 = 0001000000 (one differing bit). In this example, the Hamming distance is equal to one because only one bit differs between the first correlithm object 104 and the second correlithm object 104. As another example, a third correlithm object 104 is represented by a third 10-bit string (0110100100). In this example, the Hamming distance between the first correlithm object 104 and the third correlithm object 104 can be computed as follows:

 

1001011011 XOR 0110100100 = 1111111111 (ten differing bits). The Hamming distance is equal to ten because all of the bits are different between the first correlithm object 104 and the third correlithm object 104. In the previous example, a Hamming distance equal to one indicates that the first correlithm object 104 and the second correlithm object 104 are close to each other in the n-dimensional space 102, which means they are similar to each other. In the second example, a Hamming distance equal to ten indicates that the first correlithm object 104 and the third correlithm object 104 are further from each other in the n-dimensional space 102 and are less similar to each other than the first correlithm object 104 and the second correlithm object 104. In other words, the similarity between a pair of correlithm objects can be readily determined based on the distance between the pair of correlithm objects. 

 

As another example, the distance between a pair of correlithm objects 104 can be determined by performing an XOR operation between the pair of correlithm objects 104 and counting the number of logical high values in the binary string. The number of logical high values indicates the number of bits that are different between the pair of correlithm objects 104, which also corresponds with the Hamming distance between the pair of correlithm objects 104. 

 

In another embodiment, the distance 106 between two correlithm objects 104 can be determined using a Minkowski distance such as the Euclidean or “straight-line” distance between the correlithm objects 104. For example, the distance 106 between a pair of correlithm objects 104 may be determined by calculating the square root of the sum of squares of the coordinate difference in each dimension. 
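
The three distance computations just described (bit-by-bit comparison, XOR with a count of logical high values, and the Euclidean variant of the Minkowski distance) can be sketched in Python as follows, reusing the 10-bit example strings from above; the helper names are illustrative, not from the disclosure:

    def hamming(a: str, b: str) -> int:
        """Count how many individual bits differ between two bit strings."""
        return sum(x != y for x, y in zip(a, b))

    def hamming_xor(a: str, b: str) -> int:
        """Same distance via XOR: each logical high bit in a ^ b marks a differing bit."""
        return bin(int(a, 2) ^ int(b, 2)).count("1")

    first, second, third = "1001011011", "1000011011", "0110100100"

    print(hamming(first, second))       # 1  -> close in n-space, similar objects
    print(hamming(first, third))        # 10 -> far apart, dissimilar objects
    print(hamming_xor(first, second))   # 1  (agrees with the bitwise count)

    # Euclidean ("straight-line") distance: square root of the sum of squares
    # of the coordinate differences, treating each bit as a coordinate.
    euclidean = sum((int(x) - int(y)) ** 2 for x, y in zip(first, second)) ** 0.5
    print(euclidean)                    # 1.0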

 

The user device 112 is configured to implement or emulate a correlithm object processing system that comprises one or more sensors 108, nodes 304, and/or actors 110 in order to convert data samples between real world values or representations and correlithm objects 104 in a correlithm object domain. Sensors 108 are generally configured to convert real world data samples to the correlithm object domain. Nodes 304 are generally configured to process or perform various operations on correlithm objects in the correlithm object domain. Actors 110 are generally configured to convert correlithm objects 104 into real world values or representations. Additional information about sensors 108, nodes 304, and actors 110 is described in Figure 3. 

 

Performing operations using correlithm objects 104 in a correlithm object domain allows the user device 112 to identify relationships between data samples that cannot be identified using conventional data processing systems. For example, in the correlithm object domain, the user device 112 is able to identify not only data samples that exactly match an input data sample, but also other data samples that have similar characteristics or features as the input data samples. Conventional computers are unable to identify these types of relationships readily. Using correlithm objects 104 improves the operation of the user device 112 by enabling the user device 112 to efficiently process data samples and identify relationships between data samples without relying on signal processing techniques that require a significant amount of processing resources. These benefits allow the user device 112 to operate more efficiently than conventional computers by reducing the amount of processing power and resources that are needed to perform various operations. 

Brief Description:

Figure 2 is a perspective view of an embodiment of a mapping between correlithm objects in different n-dimensional spaces

Detailed Description:

Figure 2 is a schematic view of an embodiment of a mapping between correlithm objects 104 in different n-dimensional spaces 102. When implementing a correlithm object processing system, the user device 112 performs operations within the correlithm object domain using correlithm objects 104 in different n-dimensional spaces 102. As an example, the user device 112 may convert different types of data samples having real world values into correlithm objects 104 in different n-dimensional spaces 102. For instance, the user device 112 may convert data samples of text into a first set of correlithm objects 104 in a first n-dimensional space 102 and data samples of audio into a second set of correlithm objects 104 in a second n-dimensional space 102. Conventional systems require data samples to be of the same type and/or format in order to perform any kind of operation on the data samples. In some instances, some types of data samples cannot be compared because there is no common format available. For example, conventional computers are unable to compare data samples of images and data samples of audio samples because there is no common format. In contrast, the user device 112 implementing a correlithm object processing system is able to compare and perform operations using correlithm objects 104 in the correlithm object domain regardless of the type or format of the original data samples. 

 

In Figure 2, a first set of correlithm objects 204 are defined within a first n-dimensional space 212 and a second set of correlithm objects 208 are defined within a second n-dimensional space 210. The n-dimensional spaces may have the same number of dimensions or a different number of dimensions. For example, the first n-dimensional space 212 and the second n-dimensional space 210 may both be three-dimensional spaces. As another example, the first n-dimensional space 212 may be a three-dimensional space and the second n-dimensional space 210 may be a nine-dimensional space. Correlithm objects 104 in the first n-dimensional space 212 and second n-dimensional space 210 are mapped to each other. In other words, a correlithm object 204 in the first n-dimensional space 212 may reference or be linked with a particular correlithm object 208 in the second n-dimensional space 210. The correlithm objects 104 may also be linked with and referenced with other correlithm objects 104 in other n-dimensional spaces 102. 

 

In one embodiment, a data structure such as table 200 may be used to map or link correlithm objects 104 in different n-dimensional spaces 102. In some instances, table 200 is referred to as a node table. Table 200 is generally configured to identify a first plurality of correlithm objects 104 in a first n-dimensional space 102 and a second plurality of correlithm objects 104 in a second n-dimensional space 102. Each correlithm object 104 in the first n-dimensional space 102 is linked with a correlithm object 104 in the second n-dimensional space 102. For example, table 200 may be configured with a first column 202 that lists correlithm objects 204 as source correlithm objects and a second column 206 that lists corresponding correlithm objects 208 as target correlithm objects. In other examples, table 200 may be configured in any other suitable manner or may be implemented using any other suitable data structure. In some embodiments, one or more mapping functions may be used to convert between a correlithm object 104 in a first n-dimensional space 102 and a correlithm object 104 in a second n-dimensional space 102. 
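
In the simplest software emulation, such a node table reduces to an associative mapping keyed by bit strings. A minimal sketch under assumed contents (the entries below are invented for illustration and are not from the disclosure):

    # Node table: source correlithm objects in a first n-dimensional space
    # mapped to target correlithm objects in a second (here smaller) space.
    node_table = {
        "1001011011": "110101",
        "0110100100": "001010",
    }

    def map_correlithm(source_bits: str) -> str:
        """Follow the table link from a source to its target correlithm object."""
        return node_table[source_bits]

    print(map_correlithm("1001011011"))   # -> "110101"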

Brief Description:

Figure 3 is a schematic view of an embodiment of a correlithm object processing system

Detailed Description:

Figure 3 is a schematic view of an embodiment of a correlithm object processing system 300 that is implemented by a user device 112 to perform operations using correlithm objects 104. The system 300 generally comprises a sensor 108, a node 304, and an actor 110. The system 300 may be configured with any suitable number and/or configuration of sensors 108, nodes 304, and actors 110. An example of the system 300 in operation is described in Figure 4. In one embodiment, a sensor 108, a node 304, and an actor 110 may all be implemented on the same device (e.g. user device 112). In other embodiments, a sensor 108, a node 304, and an actor 110 may each be implemented on different devices in signal communication with each other, for example over a network. In other embodiments, different devices may be configured to implement any combination of sensors 108, nodes 304, and actors 110. 

 

Sensors 108 serve as interfaces that allow a user device 112 to convert real world data samples into correlithm objects 104 that can be used in the correlithm object domain. Sensors 108 enable the user device 112 to compare and perform operations using correlithm objects 104 regardless of the data type or format of the original data sample. Sensors 108 are configured to receive a real world value 320 representing a data sample as an input, to determine a correlithm object 104 based on the real world value 320, and to output the correlithm object 104. For example, the sensor 108 may receive an image 324 of a person and output a correlithm object 104 to the node 304 or actor 110. In one embodiment, sensors 108 are configured to use sensor tables 308 that link a plurality of real world values with a plurality of correlithm objects 104 in an n-dimensional space 102. Real world values are any type of signal, value, or representation of data samples. Examples of real world values include, but are not limited to, images, pixel values, text, audio signals, electrical signals, and biometric signals. As an example, a sensor table 308 may be configured with a first column 312 that lists real world value entries corresponding with different images and a second column 314 that lists corresponding correlithm objects 104 as input correlithm objects. In other examples, sensor tables 308 may be configured in any other suitable manner or may be implemented using any other suitable data structure. In some embodiments, one or more mapping functions may be used to translate between a real world value 320 and a correlithm object 104 in an n-dimensional space 102. Additional information for implementing or emulating a sensor 108 in hardware is described in Figure 5. 

 

Nodes 304 are configured to receive a correlithm object 104 (e.g. an input correlithm object 314), to determine another correlithm object 104 based on the received correlithm object 104, and to output the identified correlithm object 104 (e.g. an output correlithm object 316). In one embodiment, nodes 304 are configured to use node tables 200 that link a plurality of correlithm objects 104 from a first n-dimensional space 102 with a plurality of correlithm objects 104 in a second n-dimensional space 102. A node table 200 may be configured similarly to the table 200 described in Figure 2. Additional information for implementing or emulating a node 304 in hardware is described in Figure 5. 

 

Actors 110 serve as interfaces that allow a user device 112 to convert correlithm objects 104 in the correlithm object domain back to real world values or data samples. Actors 110 enable the user device 112 to convert from correlithm objects 104 into any suitable type of real world value. Actors 110 are configured to receive a correlithm object 104 (e.g. an output correlithm object 316), to determine a real world output value 322 based on the received correlithm object 104, and to output the real world output value 322. The real world output value 322 may be a different data type or representation of the original data sample. As an example, the real world input value 320 may be an image 324 of a person and the resulting real world output value 322 may be text 326 and/or an audio signal identifying the person. In one embodiment, actors 110 are configured to use actor tables 310 that link a plurality of correlithm objects 104 in an n-dimensional space 102 with a plurality of real world values. As an example, an actor table 310 may be configured with a first column 316 that lists correlithm objects 104 as output correlithm objects and a second column 318 that lists real world values. In other examples, actor tables 310 may be configured in any other suitable manner or may be implemented using any other suitable data structure. In some embodiments, one or more mapping functions may be employed to translate between a correlithm object 104 in an n-dimensional space 102 and a real world output value 322. Additional information for implementing or emulating an actor 110 in hardware is described in Figure 5. 
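
Taken together, the three tables form a pipeline from a real-world input to a real-world output. The sketch below shows the shape of that pipeline with invented table contents and exact-match lookups; the nearest-match behavior that makes the system tolerant of inexact inputs is shown in the sketch following the Figure 4 description below.

    sensor_table = {"image_of_alice": "1001011011"}   # real world -> correlithm object
    node_table   = {"1001011011": "0111001100"}       # input -> output correlithm object
    actor_table  = {"0111001100": "Alice"}            # correlithm object -> real world

    def process(real_world_value):
        input_obj  = sensor_table[real_world_value]   # sensor stage
        output_obj = node_table[input_obj]            # node stage
        return actor_table[output_obj]                # actor stage

    print(process("image_of_alice"))                  # -> "Alice"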

 

A correlithm object processing system 300 uses a combination of a sensor table 308, a node table 200, and/or an actor table 310 to provide a specific set of rules that improve computer-related technologies by enabling devices to compare and to determine the degree of similarity between different data samples regardless of the data type and/or format of the data sample they represent. The ability to directly compare data samples having different data types and/or formatting is a new functionality that cannot be performed using conventional computing systems and data structures. Conventional systems require data samples to be of the same type and/or format in order to perform any kind of operation on the data samples. In some instances, some types of data samples are incompatible with each other and cannot be compared because there is no common format available. For example, conventional computers are unable to compare data samples of images with data samples of audio samples because there is no common format available. In contrast, a device implementing a correlithm object processing system uses a combination of a sensor table 308, a node table 200, and/or an actor table 310 to compare and perform operations using correlithm objects 104 in the correlithm object domain regardless of the type or format of the original data samples. The correlithm object processing system 300 uses a combination of a sensor table 308, a node table 200, and/or an actor table 310 as a specific set of rules that provides a particular solution to dealing with different types of data samples and allows devices to perform operations on different types of data samples using correlithm objects 104 in the correlithm object domain. In some instances, comparing data samples as correlithm objects 104 is computationally more efficient and faster than comparing data samples in their original format. Thus, using correlithm objects 104 to represent data samples provides increased flexibility and improved performance compared to using other conventional data structures. The specific set of rules used by the correlithm object processing system 300 goes beyond simply using routine and conventional activities in order to achieve this new functionality and these performance improvements. 

 

In addition, correlithm object processing system 300 uses a combination of a sensor table 308, a node table 200, and/or an actor table 310 to provide a particular manner for transforming data samples between ordinal number representations and correlithm objects 104 in a correlithm object domain. For example, the correlithm object processing system 300 may be configured to transform a representation of a data sample into a correlithm object 104, to perform various operations using the correlithm object 104 in the correlithm object domain, and to transform a resulting correlithm object 104 into another representation of a data sample. Transforming data samples between ordinal number representations and correlithm objects 104 involves fundamentally changing the data type of data samples between an ordinal number system and a categorical number system to achieve the previously described benefits of the correlithm object processing system 300. 

Brief Description:

Figure 4 is a protocol diagram of an embodiment of a correlithm object process flow

Detailed Description:

Figure 4 is a protocol diagram of an embodiment of a correlithm object process flow 400. A user device 112 implements process flow 400 to emulate a correlithm object processing system 300 to perform operations using correlithm objects 104, such as facial recognition. The user device 112 implements process flow 400 to compare different data samples (e.g. images, voice signals, or text) to each other and to identify other objects based on the comparison. Process flow 400 provides instructions that allow user devices 112 to achieve the improved technical benefits of a correlithm object processing system 300. 

 

Conventional systems are configured to use ordinal numbers for identifying different data samples. Ordinal based number systems only provide information about the sequence order of numbers based on their numeric values, and do not provide any information about any other types of relationships for the data samples being represented by the numeric values, such as similarity. In contrast, a user device 112 can implement or emulate the correlithm object processing system 300, which provides an unconventional solution that uses categorical numbers and correlithm objects 104 to represent data samples. For example, the system 300 may be configured to use binary integers as categorical numbers to generate correlithm objects 104, which enables the user device 112 to perform operations directly based on similarities between different data samples. Categorical numbers provide information about how similar different data samples are to each other. Correlithm objects 104 generated using categorical numbers can be used directly by the system 300 for determining how similar different data samples are to each other without relying on exact matches, a common data type or format, or conventional signal processing techniques. 

 

A non-limiting example is provided to illustrate how the user device 112 implements process flow 400 to emulate a correlithm object processing system 300 to perform facial recognition on an image to determine the identity of the person in the image. In other examples, the user device 112 may implement process flow 400 to emulate a correlithm object processing system 300 to perform voice recognition, text recognition, or any other operation that compares different objects. 

 

At step 402, a sensor 108 receives an input signal representing a data sample. For example, the sensor 108 receives an image of a person’s face as a real world input value 320. The input signal may be in any suitable data type or format. In one embodiment, the sensor 108 may obtain the input signal in real-time from a peripheral device (e.g. a camera). In another embodiment, the sensor 108 may obtain the input signal from a memory or database. 

 

At step 404, the sensor 108 identifies a real world value entry in a sensor table 308 based on the input signal. In one embodiment, the system 300 identifies a real world value entry in the sensor table 308 that matches the input signal. For example, the real world value entries may comprise previously stored images. The sensor 108 may compare the received image to the previously stored images to identify a real world value entry that matches the received image. In one embodiment, when the sensor 108 does not find an exact match, the sensor 108 finds a real world value entry that most closely matches the received image. 

 

At step 406, the sensor 108 identifies and fetches an input correlithm object 314 in the sensor table 308 linked with the real world value entry. At step 408, the sensor 108 sends the identified input correlithm object 314 to the node 304. In one embodiment, the identified input correlithm object 314 is represented in the sensor table 308 using a categorical binary integer string. The sensor 108 sends the binary string representing the identified input correlithm object 314 to the node 304. 

 

At step 410, the node 304 receives the input correlithm object 314 and determines distances 106 between the input correlithm object 314 and each source correlithm object 104 in a node table 200. In one embodiment, the distance 106 between two correlithm objects 104 can be determined based on the differences between the bits of the two correlithm objects 104. In other words, the distance 106 between two correlithm objects can be determined based on how many individual bits differ between a pair of correlithm objects 104. The distance 106 between two correlithm objects 104 can be computed using Hamming distance or any other suitable technique. In another embodiment, the distance 106 between two correlithm objects 104 can be determined using a Minkowski distance such as the Euclidean or “straight-line” distance between the correlithm objects 104. For example, the distance 106 between a pair of correlithm objects 104 may be determined by calculating the square root of the sum of squares of the coordinate difference in each dimension. 

 

At step 412, the node 304 identifies a source correlithm object 104 from the node table 200 with the shortest distance 106. A source correlithm object 104 with the shortest distance from the input correlithm object 314 is the correlithm object 104 that either matches or most closely matches the received input correlithm object 314. 

 

At step 414, the node 304 identifies and fetches a target correlithm object 206 in the node table 200 linked with the source correlithm object 104. At step 416, the node 304 outputs the identified target correlithm object 206 to the actor 110. In this example, the identified target correlithm object 206 is represented in the node table 200 using a categorical binary integer string. The node 304 sends the binary string representing the identified target correlithm object 206 to the actor 110. 

 

At step 418, the actor 110 receives the target correlithm object 206 and determines distances between the target correlithm object 206 and each output correlithm object 316 in an actor table 310. The actor 110 may compute the distances between the target correlithm object 206 and each output correlithm object 316 in an actor table 310 using a process similar to the process described in step 410. 

 

At step 420, the actor 110 identifies an output correlithm object 316 from the actor table 310 with the shortest distance 106. An output correlithm object 316 with the shortest distance from the target correlithm object 206 is the correlithm object 316 that either matches or most closely matches the received target correlithm object 206. 

 

At step 422, the actor 110 identifies and fetches a real world output value in the actor table 310 linked with the output correlithm object 316. The real world output value may be any suitable type of data sample that corresponds with the original input signal. For example, the real world output value may be text that indicates the name of the person in the image or some other identifier associated with the person in the image. As another example, the real world output value may be an audio signal or sample of the name of the person in the image. In other examples, the real world output value may be any other suitable real world signal or value that corresponds with the original input signal. The real world output value may be in any suitable data type or format. 

 

At step 424, the actor 110 outputs the identified real world output value. In one embodiment, the actor 110 may output the real world output value in real-time to a peripheral device (e.g. a display or a speaker). In one embodiment, the actor 110 may output the real world output value to a memory or database. In one embodiment, the real world output value is sent to another sensor 108. For example, the real world output value may be sent to another sensor 108 as an input for another process. 
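
A compact way to view steps 402 through 424 is that each stage performs a nearest-match lookup: distances are computed against every table entry and the entry with the shortest distance wins, which is what lets the flow tolerate inputs with no exact match. The following is a hedged Python sketch with invented table contents and helper names:

    def hamming(a: str, b: str) -> int:
        return sum(x != y for x, y in zip(a, b))

    def nearest(candidate, entries):
        """Return the table key closest to the candidate correlithm object."""
        return min(entries, key=lambda e: hamming(candidate, e))

    node_table  = {"1001011011": "0111", "0110100100": "1000"}
    actor_table = {"0111": "Alice", "1000": "Bob"}

    input_obj = "1011011011"                     # no exact entry: one bit differs
    source    = nearest(input_obj, node_table)   # step 412: shortest distance 106
    target    = node_table[source]               # step 414: linked target object
    output    = nearest(target, actor_table)     # step 420: shortest distance again
    print(actor_table[output])                   # steps 422/424: -> "Alice"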

Brief Description:

Figure 5 is a schematic diagram of an embodiment of a computer architecture for emulating a correlithm object processing system

Detailed Description:

Figure 5 is a schematic diagram of an embodiment of a computer architecture 500 for emulating a correlithm object processing system 300 in a user device 112. The computer architecture 500 comprises a processor 502, a memory 504, a network interface 506, and an input-output (I/O) interface 508. The computer architecture 500 may be configured as shown or in any other suitable configuration. 

 

The processor 502 comprises one or more processors operably coupled to the memory 504. The processor 502 is any electronic circuitry including, but not limited to, state machines, one or more central processing unit (CPU) chips, logic units, cores (e.g. a multi-core processor), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), graphics processing units (GPUs), or digital signal processors (DSPs). The processor 502 may be a programmable logic device, a microcontroller, a microprocessor, or any suitable combination of the preceding. The processor 502 is communicatively coupled to and in signal communication with the memory 504. The one or more processors are configured to process data and may be implemented in hardware or software. For example, the processor 502 may be 8-bit, 16-bit, 32-bit, 64-bit or of any other suitable architecture. The processor 502 may include an arithmetic logic unit (ALU) for performing arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that fetches instructions from memory and executes them by directing the coordinated operations of the ALU, registers, and other components. 

 

The one or more processors are configured to implement various instructions. For example, the one or more processors are configured to execute instructions to implement sensor engines 510, delay node engines 528, node engines 512, boss engines 530, and actor engines 514. In an embodiment, the sensor engines 510, the node engines 512, and the actor engines 514 are implemented using logic units, FPGAs, ASICs, DSPs, or any other suitable hardware. 

 

In one embodiment, the sensor engine 510 is configured to receive a real world value 320 as an input, to determine a correlithm object 206 based on the real world value 320, and to output the correlithm object 206. Examples of the sensor engine 510 in operation are described in Figure 4. 

 

In one embodiment, the node engine 512 is configured to receive a correlithm object 206 (e.g. an input correlithm object 206), to determine another correlithm object 206 based on the received correlithm object 206, and to output the identified correlithm object 206 (e.g. an output correlithm object 316). The node engine 512 is also configured to compute distances between pairs of correlithm objects 206. 

 

In one embodiment, the delay node engine 528 is configured to receive a correlithm object 206 and then output the correlithm object 206 after a predetermined amount of time has elapsed. In other words, the delay node engine 528 is configured to provide delays or delay lines for a correlithm object processing system. Examples of the delay node engine 528 in operation are described in FIGS. 6-11. 

 

In one embodiment, the boss engine 530 is configured to control and synchronize components within a correlithm object processing system. The boss engine 530 is configured to send commands (e.g. execute commands or output commands) to components within a correlithm object processing system to control their operation. Examples of the boss engine 530 in operation are described in FIGS. 14-17. 

 

In one embodiment, the actor engine 514 is configured to receive a correlithm object 206 (e.g. an output correlithm object 316), to determine a real world output value 322 based on the received correlithm object 206, and to output the real world output value 322. Examples of the actor engine 514 in operation are described in Figure 4.

 

The memory 504 comprises one or more non-transitory disks, tape drives, or solid-state drives, and may be used as an over-flow data storage device, to store programs when such programs are selected for execution, and to store instructions and data that are read during program execution. The memory 504 may be volatile or non-volatile and may comprise read-only memory (ROM), random-access memory (RAM), ternary content-addressable memory (TCAM), dynamic random-access memory (DRAM), and static random-access memory (SRAM). The memory 504 is operable to store sensor instructions 516, node instructions 518, actor instructions 520, delay node instructions 522, boss instructions 524, sensor tables 308, node tables 200, actor tables 310, boss tables 526, and/or any other data or instructions. The sensor instructions 516, the node instructions 518, the delay node instructions 522, the boss instructions 524, and the actor instructions 520 comprise any suitable set of instructions, logic, rules, or code operable to execute the sensor engine 510, the node engine 512, the delay node engine 528, the boss engine 530, and the actor engine 514, respectively. 

 

The sensor tables 308, the node tables 200, and the actor tables 310 may be configured similar to the sensor tables 308, the node tables 200, and the actor tables 310 described in Figure 3, respectively. The boss table 526 generally comprises a list of components within a correlithm object processing system. Additional information about boss tables 526 is described in FIGS. 14-17. 

 

The network interface 506 is configured to enable wired and/or wireless communications. The network interface 506 is configured to communicate data with any other device or system. For example, the network interface 506 may be configured for communication with a modem, a switch, a router, a bridge, a server, or a client. The processor 502 is configured to send and receive data using the network interface 506. 

 

The I/O interface 508 may comprise ports, transmitters, receivers, transceivers, or any other devices for transmitting and/or receiving data with peripheral devices as would be appreciated by one of ordinary skill in the art upon viewing this disclosure. For example, the I/O interface 508 may be configured to communicate data between the processor 502 and peripheral hardware such as a graphical user interface, a display, a mouse, a keyboard, a key pad, and a touch sensor (e.g. a touch screen). 


Parts List

102

n-dimensional space

104

correlithm objects

106

distance

108

sensor

110

actor

112

user device

200

node table

202

source correlithm objects

204

correlithm objects

206

target correlithm object

208

correlithm object

210

n-dimensional space

212

n-dimensional space

300

item

302

sensor

304

node

306

actor

308

sensor table

310

actor tables

312

real world value

314

input correlithm object

316

output correlithm objects

318

real world values

320

item

322

item

324

image

326

text

400

correlithm object process flow

402

block

404

block

406

block

408

block

410

block

412

block

414

block

416

block

418

block

420

block

422

block

424

block

500

computer architecture

502

processor

504

memory

506

network interface

508

I/O interface

510

sensor engine

512

node engines

514

actor engines

516

sensor instructions

518

node instructions

520

actor instructions

522

delay node instructions

524

boss instructions

526

boss table

528

delay node engine

530

boss engine


Terms/Definitions

solid-state drives

touch sensor

step

text recognition

DSPs

similarities

electrical signals

TCAM

different number

special purpose machine

schematic view

schematic diagram

first correlithm object

particular correlithm object

counting

microprocessor

delay node engine

formatting

processing resources

degree

logical high values

two different people

conventional systems

unconventional manner

improvements

exact match

sensor table

input correlithm object

audio signals

different devices

indexing

one or more central processing unit

implementing

binary result

capabilities

traditional data

other suitable real world signal or value

other suitable architecture

person’s face

randomly chosen points

display

data

relationships

types

facial recognition

minkowski distance

conventional data structures

previously stored input values

unconventional configuration

various embodiments

various instructions

nodes

other operation

random-access memory

image

control unit

other components

closest matches

perspective view

represent data samples

related data samples

transceivers

audio signal

same number dimensions

biometric signals

probability

peripheral devices

identified correlithm object

dissimilar one data sample

facial hair

engines

similar different data sample

suitable type

complex signal processing techniques

performance improvements

e.g. user device

square root

comparing data samples

security or authentication applications

binary comparisons

input correlithm objects

data samples provides

system

routine

switch

other suitable data structure

search engine

entry

one or more non-transitory disks

different objects

correlithm processing system

ALU and store

shown

resources

other features

certain statistical tests

second plurality

exact matches

boss table

microcontroller

images and data samples

other words

correlithm object processing system results

similar data samples

points

previously described benefits

facial recognition application

mouse

nodes

instructions and data

glasses

n-dimensional space

processing power

state machines

operation

other respects

signal communication

communication

correlithm object process flow

engine

“straight-line” distance

their similarities

common format

other data sample

other adjacent correlithm objects

non-binary data set comparisons

time

other suitable number

several technical challenges

correlation

correlithm object processing system

audio signal or sample

special purpose computer

particular solution

speed and performance

their numeric values

name

identified target correlithm object

boss engines

received image

resulting correlithm object

rules

distances

two data samples

people

numeric values

search tokens

tape drives

source

person and output

configuration

complex processes

other n-dimensional spaces

actor instructions

statistical properties

sensor engines

other known data samples

signal processing techniques

similarity

program execution

different data samples

processing power and resources

identifying matches

input data sample matches

programmable logic device

laptop computers

disclosure

one or more mapping functions

problems

e.g. data samples

first 10-bit string

real world value

graphical user interface

high-dimensional space

process flow

different n-dimensional spaces

feature

user device

previously stored images

processing systems

new functionality

audio samples

device stores

objects

technical solution

correlithm object

delays or delay lines

source correlithm objects

one or more processors

three dimensional spaces

flexibility

peripheral device

their original format

other suitable manner

input data samples

representation

same type and/or format

only one bit differs

pair

example

predetermined amount

sensor engine

sensor

FPGAs

code

embodiment

different data types and/or formats

biometric information

read-only memory

actor tables

boss engine

technical challenges

ordinal based number systems

source correlithm object

absence

comparison

input signal

process

computing device

suitable combination

DRAM

ordinal number values

delay node instructions

keyboard

ASICs

additional information

different images

speaker

categorical number system

aspects

non-limiting example

actor table

sequence order

conventional text-based search engine

specific set

instances

correlithm

hardware or software

other suitable dimension space

input data sample

unconventional solution

different data type

receivers

one embodiment

received input correlithm object

64-bit string

real world input value

results

registers

same person

data sample

suitable set

significant amounts

text

wireless communications

information

processing

device

exact partial matches

ordinal number representations

real-time

output commands

kind

50-dimensional space

second correlithm object

images

faster search time

problem

multi-core processor

characteristics

data sets

more sensors

signal, value

other types

conventional data processing systems

camera

coordinated operations

target correlithm object

node engine

combination

more detail

touch screen

embodiments

actor

node instructions

their correlithm objects

3-dimensional space

normal distribution

traditional data processing paradigm

pair correlithm objects

three dimensional space

improved results

10-dimensional space

client

desktop computers

electronic circuitry

memory

type

suitable data type

memory or database

dimension

peripheral hardware

other data or instructions

ALU operations

differences

result

significant amount

binary string

other benefits

contrast

different data types

real world output value

table

categorical binary strings

unknown person

second n-dimensional space

match

non-binary comparisons

improved performance

bridge

respects

identified real world output value

ternary content-addressable memory

real world value entries

fetches instructions

received correlithm object

instructions

e.g. floating point

arithmetic and logic operations

devices

other devices

sensor instructions

other suitable hardware

second 10-bit string

high-dimensional binary cube

processor

common data type

boss tables

time consuming

files

conventional signal processing techniques

hardware

technical advantage

technological improvements

space

processor registers

digital signal processors

target correlithm objects

64-dimensional space

computers

others

other objects

transmitters

conventional computer

correlithm object domain

same device

other examples

input-output

data type

distance

original data sample

two images

computer technologies

sensors

node tables

numbers

faster search times

received target correlithm object

e.g. images

amount

node table

matches

dimensions

first column

cores

field-programmable gate array

SRAM

specific integrated circuits

addition

person

original input signal

computing devices

instance

user device’s

conventional computers

boss instructions

computer-related technologies

actor engine

nine dimensional space

cases

categorical numbers

network

ports

voice signals

bits

coordinate difference

number

detail

other correlithm objects

third correlithm object

benefits

binary integers

two correlithm objects

examples

other special purpose computer platforms

plurality

conventional search engines

router

shortest distance

node engines

resulting real world output value

second column

improved technical benefits

application

pairs

I/O interface

hamming distance

other identifier

data structure

ordinary skill

mobile phones

n-dimensional spaces

pixel values

arithmetic logic unit

modem

node

conventional computing systems

static random-access memory

identified input correlithm object

real world values

second example

squares

functionality

other device or system

actors

ordinal binary integers

value

other embodiments

user devices

operations

programs

actor engines

different lighting conditions

data structures

network interface

categorical binary integer string

components

ordinal numbers

commands

mapping

binary comparison

search engines

follows

over-flow data storage device

other conventional data structures

GPUs

suitable number and/or configuration

real world data samples

such programs

protocol diagram

ordinal number system

first n-dimensional space

similar characteristics or features

other suitable technique

operands

search token

third 10-bit string

original data samples

voice recognition

server

interfaces

image processing

paradigms

other number systems

XOR operation

second set

their original data type and/or format

different types

preceding

n-space

other suitable configuration

10-bit string

performing operations

increased flexibility

similar different data samples

link

their corresponding correlithm objects

many individual bits

previous example

only data samples

logic

output correlithm objects

new types

first plurality

input

data samples

electric signals

mathematical calculations

known people

various operations

point

particular manner

real world value entry

other suitable type

complicated signal processing

computer architecture

output correlithm object

identity

100-dimensional space

i.e. match

conventional activities

graphics processing units

trivial task

first set

data set

tablet computers

face

string

different numeric values

dynamic random-access memory

transforming data samples

list

sensor tables

real world values or representations

format

correlithm objects

numerical order

their operation

logic units

type or format

execution

other data samples

Package Transport by Unmanned Aerial Vehicles


Drawings

Brief Description:

Figure 1 depicts a high level view of a system for performing package transport services in accordance with an embodiment

Detailed Description:

Referring now to Figure 1, a high level view of a system 100 for performing package transport services is generally shown in accordance with an embodiment. The system 100 includes a plurality of unmanned aerial vehicles (UAV(s) 102) and a plurality of package docking device(s) 104, each of which is communicatively coupled to one or more network(s) 108. A UAV(s) 102 refers to an unmanned aircraft whose flight is autonomously controlled through onboard computer systems. In an embodiment, a portion of the flight control may be implemented remotely through interaction with a ground station (not shown). The UAV(s) 102 include physical components and related circuitry configured to pick up, carry, and drop off packages

The package docking device(s) 104 refer to structures used in assisting UAVs in implementing corresponding docking functions. A package docking device(s) 104 may be assigned to or otherwise controlled by an end user of the package transport services. A package docking device(s) 104 can be identified by the UAVs based on a unique identifier that is assigned to the package docking device(s) 104 and which identifier is communicatively conveyed to the UAV(s) 102 over a network at the time of a package transfer operation, as will be described further herein. As indicated above, the package docking devices may be permanent or semi-permanent fixed structures or may be portable structures that are lightweight and can be carried by a human.

The network(s) 108 may be any type of known network including, but not limited to, a wide area network (WAN), a local area network (LAN), a global network (e.g. Internet), a virtual private network (VPN), and an intranet. The network(s) 108 may be implemented using wireless networks or any kind of physical network implementation known in the art, e.g., using cellular, satellite, and/or terrestrial network technologies. The network(s) 108 may also include short-range wireless networks utilizing, e.g., BLUETOOTH.TM. and WI-FI.TM. technologies and protocols. In one embodiment, the UAV(s) 102 communicate with the package docking device(s) 104 over a short-range wireless network, while the UAV(s) 102 communicate with other network entities, such as the package transport services provider, over a long-range network (e.g., satellite or cellular).

The system 100 also includes a host system computer 106, a personal computer 112, and a mobile device 114, each of which is communicatively coupled to one or more of the network(s) 108. The host system computer 106 may be implemented as one or more high-speed computer processing devices, such as one or more mainframe computers capable of handling a high volume of activities conducted on behalf of end users of the package transport services. The host system computer 106 implements an application 116 to centrally manage the package transport services described herein. The application 116 includes a user interface 118 that is presented to end users via the personal computer 112 and the mobile device 114. The user interface 118 is described further in Figure 6

In one embodiment, the host system computer 106 may be implemented by an entity that sells goods to consumers. Alternatively, the host system computer 106 may be implemented by a third-party service provider that provides the package transport services as an intermediary between the seller entity and the consumers. In another embodiment, the host system computer 106 may be implemented by a non-commercial entity, e.g., for situations in which packages (such as food or medical supplies) need to be transferred between locations as part of an emergency condition where first responders are unable to gain access to various roads or locations. For purposes of illustration, the package transport services are described herein with respect to a commerce application

The personal computer 112 may be implemented as a general-purpose desktop or laptop computer. An end user consumer may access the user interface 118 of the host system computer 106 via a web browser operating on the personal computer 112. The end user may order goods from the host system computer 106, as well as schedule delivery of the goods, as will be described further herein.

The mobile device 114 refers to a portable, wireless communications device, such as a smart phone, personal digital assistant, or tablet PC. Similar to the personal computer 112, the end user may access the user interface 118 of the host system computer 106 via a web browser operating on the mobile device 114 to order goods and schedule deliveries. In an embodiment, the mobile device 114 includes a global positioning system (GPS) that enables a UAV(s) 102 to locate a package docking device(s) 104 associated with the mobile device 114, as will be described further herein.

A storage device 110 is coupled to the host system computer 106 and may alternatively be coupled to the host system computer 106 via one or more of the network(s) 108. The storage device 110 stores a variety of data used by the host system computer 106 in implementing the package transport services described herein. As shown in Figure 1, the storage device 110 stores orders generated for end users, as well as transaction records. The transaction records provide information about completed orders. It is understood that the storage device 110 may be implemented using memory contained in the host system computer 106 or may be a separate physical device. The storage device 110 is logically addressable as a consolidated data source across a distributed environment that includes the network(s) 108.

The host system computer 106 operates as a database server and coordinates access to application data including data stored in the storage device 110. The host system computer 106 may be implemented using one or more servers operating in response to a computer program stored in a storage medium accessible by the server. The host system computer 106 may operate as a network server (e.g., a web server) to communicate with the personal computer 112 and the mobile device 114 and other network entities

As indicated above, the package docking device(s) 104 may be a secured structure that is permanently or semi-permanently installed at a fixed location, such as an area of real estate, an apartment building rooftop, etc., and is described in Figure 2. In an alternative embodiment, the package docking device(s) 104 is implemented as a portable device, which is described in Figure 3. Turning now to Figure 2 and Figure 3, perspective views of package docking device A 200 and portable package docking device 300, respectively, will now be described.

Brief Description:

 Figure 2 depicts a perspective view of a package docking device in accordance with an embodiment

Detailed Description:

The package docking device A 200 of Figure 2 includes a housing 208, an opening 204 for receiving a package, and a door 206 for securing the package in the housing 208. An upper wall 202, or roof, of the housing 208 may be used as a landing site for the UAV. The package docking device A 200 may be constructed of a durable material, such as metal, and may be mounted or fixed to another permanent structure (e.g., a building or concrete base) to prevent theft or tampering. The door 206 is closed and locked to secure delivered packages, and may be opened by the UAV(s) 102 using a security key. It will be understood that the package docking device A 200 is not limited to the configuration shown in Figure 2. For example, in other configurations, the package docking device A 200 may have various shapes, sizes, and dimensions. Further, an additional panel or structure may be installed near the opening 204 such that the UAV lands on the panel within a close proximity of the opening 204 to facilitate hand off of the package.

Brief Description:

Figure 3 depicts a perspective view of a portable package docking device in accordance with an embodiment

Detailed Description:

The portable package docking device 300 of Figure 3 is constructed of a lightweight and flexible material to provide ease of portability. The portable package docking device 300 may be implemented as a substantially flat structure with a thickness that is narrow enough to enable the portable package docking device 300 to be rolled up for portability and storage. Dimensions of the portable package docking device 300 may vary based on applications of its use. In one non-limiting embodiment, the length and width of the portable package docking device 300 are sized substantially similar to the length and width of a beach towel. Details of the various components of the package docking device A 200 and portable package docking device 300 are described further in Figure 5.

Brief Description:

Figure 4 depicts a block diagram of an unmanned aerial vehicle (UAV) in accordance with an embodiment

Detailed Description:

Turning now to Figure 4, a UAV 102 will now be described in accordance with an embodiment. The UAV 102 includes communication components 404, a control processor 406, and memory 408. The memory 408 stores a transaction packet 410, a transaction record 412, and an application 414. The application 414 is executable by the control processor 406 to coordinate the functions of the UAV 102 as described herein. The control processor 406 is communicatively coupled to the circuitry of the UAV 102 to receive operational data from components of the UAV 102, such as data indicating the activation of landing gear or the physical engagement of the landing gear at a package docking device.

The communication components 404 include an antenna configured to receive communications from the host system computer 106 over one or more of the network(s) 108. The communications may include instructions associated with a package transfer operation. The package transfer operation refers to the pickup and delivery of a package to a target package docking device as defined by GPS coordinates (and vertical scale information that provides altitude data corresponding to the delivery point) and a device identifier of the package docking device(s) 104. The instructions include the GPS coordinates, vertical scale data, and the identifier of the package docking device(s) 104 to which the package will be delivered. The instructions may also include an identification of an order corresponding to the package that differentiates between orders placed for two or more items by the same consumer. In an embodiment, the instructions may further include individual identification of items within an order. For example, a package may contain a partial order due to weight restrictions placed on the UAV, or because an item is not available at the precise time of transport, or because the items are picked up from multiple geographically-dispersed locations for transport. If a package contains a partial order, the UAV may contain instructions that identify those items of the order that are being transported by the UAV. The instructions may also indicate that the partial order reflects `x` of `y` items in an order being delivered. If the package docking system is a secure device, the instructions may also include a security key, as described further herein. 
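As a rough illustration of the instructions described above, the contents of the transaction packet 410 might be modeled as follows. This is a minimal sketch; every field name and type is an assumption made for illustration, since the disclosure does not define a concrete format.

```python
# Minimal sketch of the contents of the transaction packet 410 described
# above. All field names and types are illustrative assumptions; the
# disclosure does not specify a concrete format.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class TransactionPacket:
    gps_coordinates: Tuple[float, float]   # delivery point (latitude, longitude)
    vertical_scale: float                  # altitude data for the delivery point
    device_identifier: str                 # identifier of the target package docking device
    order_id: str                          # differentiates orders placed by the same consumer
    item_ids: List[str] = field(default_factory=list)  # items carried, e.g., for a partial order
    items_of_total: Optional[Tuple[int, int]] = None   # partial order: `x` of `y` items delivered
    security_key: Optional[str] = None                 # present only for secured docking devices
```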

These instructions are conveyed to the UAV(s) 102 from the host system computer 106 as a transaction packet 410. In addition, the communications enabled by the antenna include communications from the UAV(s) 102 to the host system computer 106. For example, upon completing a package transfer operation, the UAV(s) 102 may send associated information (e.g., transaction recording, time/date stamp, etc.) to the host system computer 106, which is stored in the storage device 110 as a transaction record. Alternatively, the information may be stored as the transaction record 412 in the memory 408 of the UAV 102. 

The communication components 404 also include an antenna configured to send short-range wireless communications to the package docking device(s) 104. In an embodiment, when the UAV 102 reaches its destination defined by the GPS coordinates, it may send a communication to discover the presence of the package docking device(s) 104. Alternatively, the package docking device(s) 104 may be configured to periodically send out signals to enable its discovery. Once the UAV 102 has discovered the presence of a package docking device(s) 104, the UAV 102 requests the device identifier of the discovered device 104. The device identifier received from the package docking device(s) 104 is compared against the device identifier in the transaction packet 410 to ensure that the package is delivered to the correct package docking device. The communication components 404 may further include an adapter configured to translate radio signals from the package docking device(s) 104 to data that is stored in the memory 408 of the UAV 102.

In an embodiment, the communication components 404 may include WI-FI components that are initiated when no package docking device is discovered, or alternatively, if the device identifier received by the UAV 102 is different than the device identifier in the transaction packet 410. The first scenario may occur if the package docking device has been moved to a new location. The latter scenario may occur if the package docking device has been moved from its location and another package docking device is subsequently placed in that location. The latter scenario may otherwise occur when two package docking devices are located in very close proximity to each other (e.g., within a few feet), such that the GPS coordinates span the physical locations of both package docking devices and the UAV 102 receives the device identifier from the wrong package docking device. In any of these scenarios, the UAV 102 may utilize the WI-FI components to send a signal searching for the package docking device.

Optionally, the UAV 102 may include a video recording device 402 to record package transfer operations. The video recording device 402 may be configured through prompts from the application 414 to begin recording, e.g., when the application 414 receives an indication that the landing apparatus of the UAV 102 has been triggered. The video recording device 402 may be prompted through the application 414 to discontinue recording when the application 414 receives an indication that the package transfer operation (i.e., successful delivery of the package to the package docking device(s) 104) is complete. The recording may be stored as a transaction record 412 in the memory 408 of the UAV 102. In addition, the recording may be transmitted through the communication components 404 to the package docking device(s) 104 if the package docking device(s) 104 is equipped to receive the recording. The transaction record 412 may include other information including a time/date stamp of the delivery, as well as transaction details (e.g., invoicing information, billing and payment information, etc.). In addition, the transaction record 412 may be supplemented with electronic coupons or advertisements for goods offered by the seller or through affiliates of the seller, if desired.

Brief Description:

Figure 5 depicts a block diagram of components of the package docking devices of Figure 2 and Figure 3 in accordance with an embodiment

Detailed Description:

Turning now to Figure 5, a package docking device 104 (package docking device A 200 or portable package docking device 300) will now be described. The package docking device 104, whether portable or fixed, includes communication component(s) 404, a processor 508, and memory 512. If the package docking device 104 is a portable device, the communication components 404 may include a GPS device that is used by the UAV 102 to track the location of the package docking device 104. Alternatively, a GPS system on the mobile device 114 associated with the package docking device 104 may be used to enable the UAV 102 to track the location of the package docking device 104, assuming that the mobile device 114 is in close proximity to the package docking device 104.

The communication components 404 include an antenna configured to receive communications from the UAV 102 over a short-range network (e.g., BLUETOOTH). For example, the package docking device 104 may receive prompts from the UAV 102 to discover its presence at a GPS location. The package docking device 104 may receive requests for the device identifier 510 of the package docking device 104. Further, upon completing a package transfer operation, the UAV 102 may send associated information (e.g., transaction recording, time/date stamp, etc.) to the package docking device 104, which may be stored in the memory 512 as a transaction record 412

The memory 512 stores a device identifier 510, a transaction record 412, and an application 414. The application 414 is executable by the processor 508 to coordinate the functions of the package docking device 104 described herein. The device identifier 510 may be a network address of the package docking device 104.

The package docking device 104 may optionally include a video recording device 402 for recording package transfer operations. The video recording device 402 may be disposed at a location on the package docking device 104 suitable for capturing the hand off of the package to the package docking device 104. 

If the package docking device 104 is a secured device (e.g., the  package docking device A 200 of Figure 2), the package docking device 104 may include a locking system 502 and a security key 506. The locking system 502 may be implemented as an electronic lock (e.g., using electro-magnetics) using the security key 506 as an authentication means to unlock the package docking device 104. The security key 506 may be provided to the host system computer 106 as part of the order process

In an embodiment, the package docking device 104 may include a detection sensor 504 that is configured to detect a landing function of the UAV 102. For example, the detection sensor 504 may be a weight sensor that is disposed on the landing panel (e.g., upper wall 202 or roof of package docking device A 200 in Figure 2). The detection of weight may indicate to the package docking device that the UAV 102 has landed, and the subsequent absence of weight may indicate the departure of the UAV 102. This information may be stored in the package docking device 104 as part of the transaction record 412.

Brief Description:

Figure 6 depicts a user interface for placing and reviewing an order for delivery of a package in accordance with an embodiment

Detailed Description:

As indicated above, the package transport services are managed by the host system computer 106. In an embodiment, an end user of the services may access a website of the host system computer 106 and is presented with a user interface 118 for initiating an order for goods offered by the entity associated with the host system computer 106. A sample user interface screen 600 is shown in Figure 6. An area 602 of the user interface screen 600 is used by the end user to enter order information, and an area 604 of the user interface screen 600 is used by the end user to view the order

In placing an order, the user is prompted to enter GPS coordinates 606 of the delivery point at which the package is to be delivered. The end user may also enter vertical scale information 608 in the form of the sea level corresponding to the delivery point. In an embodiment, if the order is placed through a mobile device 114, the GPS coordinates of the mobile device 114 may be transmitted to the host system computer 106 and automatically entered in the corresponding fields of area 602. The user further enters a device identifier 510 of the package docking device to which the package will be delivered. The end user then selects an option “Ship to this destination” 610. The user interface screen 600 also includes an option that allows the end user to select a mailing address for the delivery if desired. The review order information 604 provides a summary of the order details including payment and billing information, as well as discounts. The order information 604 may also include an order identifier 612 assigned to the order. Once the order has been placed, the package transport services include providing order details to a designated UAV for implementing a package transfer operation.

Brief Description:

Figure 7 depicts a flow diagram of a process for implementing package transport operations in accordance with an embodiment

Detailed Description:

Turning now to Figure 7, a flow diagram of a process 700 for implementing the package transfer operations will now be described in an embodiment. The process described in Figure 7 assumes that the UAV 102 has picked up the package subject to an order placed, e.g., via the user interface screen 600 of Figure 6

At block 702, the UAV 102 receives a transaction packet (e.g., packet 410 of Figure 4) for the package transfer operation. The transaction packet includes the GPS coordinates and the device identifier of the package docking device associated with the package transfer request. The transaction packet may be stored in the memory 408 of the UAV 102. 

At block 704, upon arrival at the delivery point as defined by the GPS coordinates and vertical scale information, the application 414 receives a device identifier from a package docking device located at the delivery point and compares the device identifier for the package docking device located at the delivery point with the device identifier stored in the transaction packet 410

At block 706, the application 414 determines whether the device identifier of the package docking device located at the delivery point matches the device identifier stored in the transaction packet 410. If so, the application 414 directs the UAV 102 to initiate the package transfer operation. The package transfer operation includes a hand off of the package between the UAV and the package docking device. The package transfer operation may also include recording the details of the hand off including the date and time of delivery and/or video recording the hand off.

At block 710, once the operation is completed, the application 414 transmits confirmation of the operation to the end user. This may be implemented using various techniques. For example, the confirmation may be directly transmitted to the package docking device over the wireless network. Alternatively, the confirmation may be transmitted by the UAV 102 over satellite, cellular, or other long-range network to the host system computer 106, which then provides the end user with access to the confirmation. In another embodiment, the UAV 102 may store the confirmation along with other confirmations in its memory 408 and upload the confirmations in a batch process to the host system computer 106 at a designated time.

If, however, the device identifier of the package docking device at the delivery location does not match the device identifier in the transaction packet 410, at block 712, the application 414 directs the communication components 404 to transmit a request over a short-range (e.g., BLUETOOTH or WI-FI) network. The request may include the device identifier, or network address, of the package docking device. Assuming that the package docking device has not moved outside of the range of communication of the UAV, the package docking device having the network address sends a signal indicating its presence at a new location. The package docking device, or alternatively the mobile device associated with the package docking device, may then send updated GPS location information to the UAV at block 714. The UAV is re-routed to the new location based on the updated GPS coordinates at block 716, and the process reverts to block 704.

As indicated above, the package docking device may be a secured system in which a locking system and security key (e.g., locking system 502 and security key 506, respectively, of Figure 5) are used to gain access to the device. In this embodiment, the process 700 may include additional functions. The control processor 406 may be configured to store the security key associated with the package docking device. In an embodiment, the control processor 406 may be configured to receive an acknowledgement communication from the package docking device upon completion of the package transfer operation indicating the package docking device received the package. In a further embodiment, the control processor 406 may be configured to transmit, upon reaching a pre-defined clearance after departure of the UAV, a request to the package docking device to secure the package by locking the door. In this embodiment, a confirmation of the transaction may be transmitted by the UAV to the package docking device when the UAV receives an acknowledgement that the package is secured.
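The overall flow of process 700, including the re-routing branch, can be sketched as follows. The UAV interface below is an assumption made purely for illustration (the disclosure defines no programmatic API), and the sketch reuses the hypothetical TransactionPacket structure shown earlier.

```python
# Illustrative sketch of process 700 (blocks 702-716). The UavControl
# interface is assumed for illustration only; the disclosure defines no
# programmatic API.
from typing import Protocol, Tuple

class UavControl(Protocol):
    def fly_to(self, coords: Tuple[float, float], altitude: float) -> None: ...
    def request_device_identifier(self) -> str: ...
    def hand_off_package(self, record_video: bool) -> None: ...
    def transmit_confirmation(self) -> None: ...
    def broadcast_search(self, device_identifier: str) -> Tuple[float, float]: ...

def run_package_transfer(uav: UavControl, packet: "TransactionPacket") -> None:
    # Block 702: the transaction packet has been received and stored.
    uav.fly_to(packet.gps_coordinates, packet.vertical_scale)
    while True:
        dock_id = uav.request_device_identifier()    # block 704: query device at delivery point
        if dock_id == packet.device_identifier:      # block 706: identifiers match?
            uav.hand_off_package(record_video=True)  # hand off and record the transfer
            uav.transmit_confirmation()              # block 710: confirm completion to end user
            return
        # Blocks 712-716: search for the addressed docking device over the
        # short-range network, obtain its updated GPS location, re-route,
        # and revert to block 704.
        new_coords = uav.broadcast_search(packet.device_identifier)
        uav.fly_to(new_coords, packet.vertical_scale)
```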


Parts List

100

system

102

UAV(s)

104

package docking device(s)

106

host system computer

108

network(s)

110

storage device

112

personal computer

114

mobile device

116

application

118

user interface

200

package docking device A

202

upper wall

204

opening

206

door

208

housing

300

portable package docking device

402

video recording device

404

communication component(s)

406

control processor

408

memory

410

transaction packet

412

transaction record

414

application

502

locking system

504

detection sensor

506

security key

508

processor

510

device identifier

512

memory

600

user interface screen

602

area

604

order information

606

GPS coordinates

608

vertical scale information

610

ship to this destination

612

order identifier

700

process

702

block

704

block

706

decision block

708

block

710

block

712

block

714

block

716

block


Terms/Definitions

external computer

technologies

end users

return location

specification

firewalls, switches, gateway computers

BLUETOOTH.TM

server

e.g., satellite or cellular

regard

connection

UAV lands

semi-permanent structure

groups

computer program products

electromagnetic storage device

delivery

web server

computer

portability and storage

one non-limiting embodiment

completed orders

summary

various modifications

communication component(s)

vertical scale information

computer program product

short-range network

goods

addition

special purpose hardware

two or more items

communications

conjunction

antenna

third-party service provider

pickup/return operations

operational data

local area network

GPS device

stores

delivery destinations

dimensions

systems

designated time

order packages

delivery points

laptop computer

savings

copper transmission cables

acknowledgement communication

instruction execution device

package transport operations

sizes

meeting

orders

program instructions

article

oriented programming language

unique identifier

embodiments

delivery point

method

reference

process

intranet

host system

completion

transaction recording

package transport services

firmware instructions

protocols

office

date and time

specified functions or acts

electronic lock

portable compact disc

general purpose computer

personal digital assistant

storage medium (or media

transaction records

wireless network

functionality

user interface screen

portability

source code

terrestrial network technologies

aspects

physical network implementation

apparatus

rooftop

docking functions

one more other features

methods

SRAM

mobile device

electro-magnetics

terminology

portion

global network

block

other information

updated GPS coordinates

order details

e.g. Internet

roof

provider

portable package docking devices

other programmable apparatus

provider/service

one or more servers

packet

information

communications components

option

transaction packet

flexible material

processor

fixed package docking devices

electronic coupons or advertisements

application data

GPS system

permanent structure

virtual private network

communication components

database server

wire

network server

human

conventional procedural programming languages

behalf

other transmission media

electronic circuitry

user

item

intermediary

following detailed description

identification

successful delivery

configuration

memory stick

door

logic arrays

metal

possible implementations

additional panel or structure

state-setting data

storage

hand

radio waves

destinations

other configurations

landing apparatus

practical application

context

internet

user’s

acknowledgement

precise time

device

combination

ground station

multiple destinations

specify preferred delivery locations

authentication means

subsequent absence

wide area network

building or concrete base

GPS coordinates

“C” programming language

flight control

range

light pulses

other long-range network

object

new location

tangible device

principles

device identifier

transaction details

transaction

flowchart or block diagrams

deliveries

correct package docking device

mechanisms

goods and schedule deliveries

substantially flat structure

schedule delivery

networks

UAV transports packages

network adapter card or network interface

FPGA

package transfer request

specified logical function(s)

order information

opening

Internet Service Provider

computer instructions

materials

accompanying drawings

outdoor arenas

arrival

host system or service provider

locking mechanism

system

multiple parties

host system computer

thickness

module

element components

distributed environment

latter scenario

associated information

its presence

following

field-programmable gate arrays

billing and payment information

routers

related circuitry

scope and spirit

groove

location

foregoing

business

smalltalk

high level view

apartment complex

delivered packages

reverse order

correct person

length and width

first responders

punch-cards

special purpose hardware-based systems

function

means

computer readable program instructions

one or more programming languages

consumer

landing function

website

seller

electromagnetic waves

discounts

durable material

portable package docking device

floppy disk

package deliveries

plurality

instruction-set-architecture

video recording device

high volume

affiliates

enabling package deliveries

EPROM or Flash memory

time

food

microcode

vertical scale data

integers

presence or addition

unmanned aerial vehicles (UAVs)

order process

external storage device

functions/acts

operations

locations

package docking system

network address

long-range network

pickup and delivery

manufacture

area

identifier

kind

package docking device communicates

particular manner

package transfer operations

transitory signals

GPS location

operational steps

real estate

secured device

flowchart illustrations

flow diagram

variations

apartment building rooftop

transaction record

scenarios

physical components

its memory

WI-FI components

indicated above

close proximity

its location

UAV(s)

access

its discovery

apartment tenant

mechanically encoded device

flowchart illustrations and/or block diagrams

illustration and description

electrical signals

fact

wireless networks

transport

many modifications

order

housing

presence

emergency condition

delivery location

optical storage device

electronic storage device

items

particular use

portable, wireless communications device

designated UAV

general-purpose desktop

payment and billing information

respective computing/processing device

same consumer

package docking devices

purpose

description

detection sensor

computing/processing device

architecture

corresponding package docking device

landing site

function elements

request

entity

very close proximity

flowchart illustration

package

series

few feet

capability

flowchart and block diagrams

short-range

waveguide

users

multiple geographically-dispersed locations

perspective views

special purpose computer

remote computer or server

random access memory

block diagram

individual identification

lightweight

seller entity

structure, material, or act

various components

structures

figure

turning

known networks

signal

conclusion

other devices

landing gear

response

commerce application

web browser

secured system

requests

particular item

remote work locations

machine

one or more networks

one or more mainframe computers

memory

end user consumer

indication

physical locations

steps

further embodiment

fiber-optic cable

secured structure

subject matter

network(s)

means or step

plural forms

elements

beach towel

physical engagement

other freely propagating electromagnetic waves

end user

services

departure

static random access memory

wireless transmission

order identifier

additional functions

unmanned aircraft

stand-alone software package

locking system

various techniques

other device

singular forms

weight restrictions

network

weight

activation

two package docking devices

details

corresponding structures

components

confirmation

recording package transfer operations

erasable programmable read-only memory

review order information

one or more high-speed computer processing devices

more specific examples

tablet PC

computing/processing devices

theft or tampering

semiconductor storage device

sample user interface screen

non-exhaustive list

delivery time

package subject

control processor

weight sensor

ordinary skill

medical supplies

non-commercial entity

personal computer

various embodiments

illustrations and/or block diagrams

onboard computer systems

optical transmission fibers

densely populated city

unmanned aerial vehicle

remote computer

terms

magnetic storage device

satellite

altitude data

digital versatile disk

other claimed elements

state information

interaction

signals

corresponding fields

DETAILED DESCRIPTION

read-only memory

purposes

user interface

target package docking device

logic circuitry

consumers

block diagrams

global positioning system

its destination

recording

communication

circuitry

figures

end user’s

blocks

situations

data

instructions

short range

landing panel

similar programming languages

partial order

separate physical device

acts

two blocks

service provider

delivery UAV

variety

suitable combination

particular embodiments

one destination

confirmations

operation

computer program

short-range wireless communications

sea level

package transport services provider

discovered device

others

package transfer operation

security key

example

prompts

alternative embodiment

package delivery service provider or centralized service

short-range wireless network

applications

other features

storage device

movement

panel

permanent or semi-permanent fixed structures

time/date stamp

stated features

portable device

edge servers

equivalents

perspective view

package delivery provider

first scenario

packages

functions

programmable data processing apparatus

consolidated data source

code

flowchart and/or block diagram block or blocks

purchase or order

highly populated areas

package docking device(s)

e.g., upper wall or roof

implement aspects

present invention

function/act

machine instructions

device identifier and package docking device

ease

mailing address

detection

form

other confirmations

various roads or locations

one embodiment

radio signals

performing package transport services

adapter

fixed location

succession

hard disk

advantages

secure device

application

updated GPS location information

storage medium accessible

alternatively the mobile device

batch process

segment

illustration

portable computer diskette

locking system and security key

number

embodiment

one application

wrong package docking device

destination

upper wall

other programmable data processing apparatus

foregoing

smart phone

part

activities

cellular, satellite

invention

storage medium

alternative implementations

claims

type

other network entities

various shapes

combinations

home

Component Block Diagram of Electronic Device


Drawings

Brief Description:

Figure 1 is a block diagram of components of an electronic device 100, in accordance with aspects of the present disclosure

Detailed Description:

An example of a suitable electronic device may include various internal and/or external components which contribute to the function of the device. Figure 1 is a block diagram illustrating the components that may be present in such an electronic device 100 and which may allow the electronic device 100 to function in accordance with the techniques discussed herein. As will be appreciated, the various functional blocks shown in Figure 1 may include hardware elements (including application specific or generic circuitry), software elements (including computer code stored on a machine-readable medium) or a combination of both hardware and software elements. It should further be noted that Figure 1 is merely one example of a particular implementation and is merely intended to illustrate the types of components that may be present in an electronic device 100. For example, in the presently illustrated embodiment, these components may include a display 104, I/O ports 116, input structure(s) 106, data processing circuitry, such as one or more processor(s) 112, memory 118, a non-volatile storage 114, expansion card(s) 108, a network device 102, and a power source 110

With regard to each of these components, the display 104 may be used to display various images generated by the electronic device 100. The display 104 may be any type of display such as a cathode ray tube (CRT), a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, or other suitable display. In certain embodiments of the electronic device 100, the display 104 may include a touch-sensitive element, such as a touch screen.

The I/O ports 116 may include ports configured to connect to a variety of external devices, such as a power source or other electronic devices (such as handheld devices and/or computers, printers, projectors, external displays, modems, docking stations, and so forth). For example, in some embodiments, peripheral hardware attachments such as a credit card reader, commonly referred to as a card sled, may be connected to the electronic device 100 through the I/O ports 116. In some embodiments, information obtained through the credit card reader may be transmitted to a suitable processor (e.g., the processor(s) 112). The I/O ports 116 may support any standard or proprietary interface type, such as a universal serial bus (USB) port, a video port, a serial connection port, an IEEE-1394 port, an ethernet or modem port, and/or an AC/DC power connection port.

The input structure(s) 106 may include the various devices, circuitry, and pathways by which input or feedback is provided to data processing circuitry, such as the processor(s) 112. Such input structure(s) 106 may be configured to control a function of the electronic device 100 when actuated. For example, the input structure(s) 106 may include buttons, sliders, switches, control pads, keys, knobs, scroll wheels, keyboards, mice, touchpads, and so forth. In certain embodiments, the input structure(s) 106 may also include such components as global positioning system (GPS) circuitry and/or accelerometers that convey information about the location and/or orientation of the electronic device 100 to the processor(s) 112

In certain embodiments, an input structure(s) 106 and display 104 may be provided together, such as in the case of a touch screen, where a touch sensitive mechanism is provided in conjunction with the display 104. In such embodiments, the user may select or interact with displayed interface elements via the touch sensitive mechanism. In this way, the displayed user interface may provide interactive functionality, allowing a user to select, by touch screen or other input structure, from among options displayed on the display 104.

User interaction with the input structure(s) 106, such as to interact with a user or application interface displayed on the display 104, may generate electrical signals indicative of the user input. These input signals may be routed via suitable pathways, such as an input hub or bus, to data processing circuitry, such as the processor(s) 112, for further processing

The processor(s) 112 may provide data processing capability to execute the operating system, programs, user and application interfaces, and any other functions of the electronic device 100. The processor(s) 112 may include one or more microprocessors, such as one or more “general-purpose” microprocessors, one or more special-purpose microprocessors and/or ASICs, or some combination of such processing components. For example, the processor(s) 112 may include one or more reduced instruction set (RISC) processors, as well as graphics processors, video processors, audio processors, and/or related chip sets.

The instructions or data to be processed by the processor(s) 112 may be stored in a memory 118. The memory 118 may be provided as a volatile memory, such as random access memory (RAM), and/or as a non-volatile memory, such as read-only memory (ROM). The memory 118 may store a variety of information and may be used for various purposes. For example, the memory 118 may store firmware executed by the processor(s) 112 (such as basic input/output instructions or operating system instructions, including instructions implementing non-alphanumeric authentication (e.g., authentication not based on keys or characters found on a keyboard) as discussed herein), as well as other programs that enable various functions of the electronic device 100, such as user interface functions and processor functions. In addition, the memory 118 may be used for buffering or caching during operation of the electronic device 100.

The components may further include a non-volatile storage 114 for persistent storage of data and/or instructions. The non-volatile storage 114 may include flash memory, a hard drive, or any other optical, magnetic, and/or solid-state storage media. The non-volatile storage 114 may be used to store data files such as personal or business information (e.g., financial and other account information), software, wireless connection information (e.g., information that may enable the electronic device 100 to establish a wireless connection, such as a telephone or wireless network connection), and any other suitable data. In addition, the non-volatile storage 114 may also store code and/or data for implementing various functions of the electronic device 100, such as application or program code, data associated with such applications or programs, operating system code, user configured preferences, as well as code for implementing secure user authentication as discussed herein. 

The embodiment illustrated in Figure 1 may also include one or more card or expansion slots. The card slots may be configured to receive an expansion card(s) 108 that may be used to add functionality, such as additional memory, I/O functionality, or networking capability, to the electronic device 100. Such an expansion card(s) 108 may connect to the device through any type of suitable standard or proprietary connector, and may be accessed internally or externally to the housing of the electronic device 100. For example, in one embodiment, the expansion card(s) 108 may be a flash memory card, such as a SecureDigital (SD) card, mini- or microSD, CompactFlash card, multimedia card (MMC), or the like.

The components depicted in Figure 1 also include a network device 102, such as a network controller or a network interface card (NIC). In one embodiment, the network device 102 may be a wireless NIC providing wireless connectivity over any 802.11 standard or any other suitable wireless networking standard. The network device 102 may allow the electronic device 100 to communicate over a network, such as a Local Area Network (LAN), Wide Area Network (WAN), cellular network, or the internet. Further, the electronic device 100 may connect to and send or receive data with any device on the network, such as portable electronic devices, personal computers, printers, and so forth. Alternatively, in some embodiments, the electronic device 100 may not include a network device 102. In such an embodiment, a NIC may be added as an expansion card(s) 108 to provide similar networking capability as described above. 

Further, the components may also include a power source 110. In one embodiment, the power source 110 may be one or more batteries, such as a lithium-ion polymer battery. The battery may be user-removable or may be secured within the housing of the electronic device 100, and may be rechargeable. Additionally, the power source 110 may include AC power, such as provided by an electrical outlet, and the electronic device 100 may be connected to the power source 110 via a power adapter. This power adapter may also be used to recharge one or more batteries if present


Parts List

100

electronic device

102

network device

104

display

106

input structure(s)

108

expansion card(s)

110

power source

112

processor(s)

114

non-volatile storage

116

I/O ports

118

memory


Terms/Definitions

electrical outlet

user interaction

keyboards

network device

organic light

buffering or caching

read-only memory

particular implementation

software

standard

related chip sets

volatile memory

graphics processors

such input structures

such an embodiment

card

instructions or data

telephone

mice

touch-sensitive element

AC/DC power connection port

proprietary interface type

interactive functionality

modems

battery

persistent storage

lithium-ion polymer battery

certain embodiments

options

e.g., processor

wireless connection information

power adapter

similar networking capability

non-volatile memory

other programs

operating system instructions

processors

one or more processors

networking capability

regard

instructions

external components

techniques

modem port

such processing components

peripheral hardware attachments

power source

cathode ray tube

display

various devices

components

merely one example

CompactFlash card

data and/or instructions

user or application interface

circuitry

such components

keyboard

mini- or microSD

ports

operation

network controller

IEEE-1394 port

additional memory

firmware

input signals

case

network interface card

solid-state storage media

projectors

operating system

printers

suitable pathways

scroll wheels

handheld devices and/or computers

wireless connection

conjunction

one or more batteries

audio processors

serial connection port

various images

embodiments

example

buttons

further processing

one or more reduced instruction set

keys

machine-readable medium

pathways

multimedia card

computer code

networking standard

processor functions

displayed user interface

aspects

input hub

business information

basic input/output instructions

functionality

I/O ports

Wide Area Network

wireless NIC providing wireless connectivity

hardware elements

docking stations

types

video port

expansion card(s)

input or feedback

touchpads

code and/or data

credit card reader

input structure(s)

proprietary connector

touch screen

one embodiment

hard drive

global positioning system

user and application interfaces

external displays

other input structure

electrical signals

suitable processor

program code

video processors

diode

non-volatile storage

various functional blocks

portable electronic devices

other functions

keys or characters

processor(s)

one or more “general-purpose” microprocessors

addition

electronic device

convey information

wireless network connection

present disclosure

software elements

variety

device

personal computers

cellular network

RISC

data processing circuitry

flash memory card

programs

other suitable display

light

internet

non-alphanumeric authentication

SecureDigital (SD) card

user

presently illustrated embodiment

Local Area Network

combination

expansion card

processor

such an electronic device

ethernet

various purposes

external devices

data processing capability

code

authentication

AC power

I/O functionality

block diagram

knobs

send

data

one or more microprocessors

user input

other electronic devices

networking device

hardware and software elements

preferences

card slots

input structure

information

present

random access memory

operating system code

memory device

touch sensitive mechanism

displayed interface elements

universal serial bus

such embodiments

memory

user interface functions

embodiment

data files

network

type

other suitable data

housing

one or more card or expansion slots

flash memory

such applications or programs

secure user authentication

sliders, switches, control pads

liquid crystal display

one or more special-purpose microprocessors and/or ASICS

function

application

location and/or orientation

various functions

such an expansion card

suitable electronic device

CNN


Drawings

Brief Description:

Figure 1 illustrates a convolutional neural network 100 in accordance with one embodiment.

Detailed Description:

Figure 1 illustrates an exemplary convolutional neural network 100. The convolutional neural network 100 arranges its neurons in three dimensions (width, height, depth), as visualized in convolutional layer 104. Every layer of the convolutional neural network 100 transforms a 3D volume of inputs to a 3D output volume of neuron activations. In this example, the input layer 102 encodes the image, so its width and height would be the dimensions of the image, and the depth would be 3 (Red, Green, Blue channels). The convolutional layer 104 further transforms the outputs of the input layer 102, and the output layer 106 transforms the outputs of the convolutional layer 104 into one or more classifications of the image content.
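As a concrete, framework-free sketch of these volumes, the example below uses sizes that are assumptions chosen to match the figures (a 32×32 RGB input, 5 filters, 10 output classes):

```python
import numpy as np

# Framework-free sketch of the 3D volumes described above. The sizes are
# illustrative assumptions (a 32x32 RGB input, 5 filters, 10 classes).
image = np.zeros((32, 32, 3))        # input layer 102: width x height x depth (R, G, B)
activations = np.zeros((28, 28, 5))  # convolutional layer 104: depth = number of filters
class_scores = np.zeros(10)          # output layer 106: one score per classification
print(image.shape, activations.shape, class_scores.shape)
```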

Brief Description:

illustrates convolutional neural network layers 200 in accordance with one embodiment.

Detailed Description:

Figure 2 illustrates exemplary convolutional neural network layers 200 in more detail. An example subregion of the input layer region 204 within an input layer region 202 of an image is analyzed by a convolutional layer subregion 208 in the convolutional layer 206. The input layer region 202 is 32×32 neurons long and wide (e.g., 32×32 pixels) and three neurons deep (e.g., three color channels per pixel). Each neuron in the convolutional layer 206 is connected only to a local region of the input layer region 202 spatially (in height and width), but to the full depth (i.e., all color channels if the input is an image). Note that there are multiple neurons (five in this example) along the depth of the convolutional layer subregion 208 that analyze the subregion of the input layer region 204, in which each neuron of the convolutional layer subregion 208 may receive inputs from every neuron of the subregion of the input layer region 204.
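
The connectivity arithmetic described above can be checked directly. The short Python sketch below assumes, purely for illustration, a 5×5 spatial subregion; only the depth of three color channels and the five-neuron depth column come from the figure.

    # Each neuron connects to a local spatial subregion across the full
    # input depth (3 color channels, per Figure 2). The 5x5 spatial
    # extent is an assumed example value.
    subregion_height, subregion_width, input_depth = 5, 5, 3
    inputs_per_neuron = subregion_height * subregion_width * input_depth
    print(inputs_per_neuron)  # 75 inputs (and weights) per neuron

    # Five neurons are stacked along the depth of the convolutional layer
    # subregion 208, all receiving inputs from the same subregion.
    depth_column = 5
    print(inputs_per_neuron * depth_column)  # 375 weights in the column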

Brief Description:

illustrates a VGG net 300 in accordance with one embodiment.

Detailed Description:

Figure 3 illustrates a popular form of a CNN known as a VGG net 300. The initial convolution layer 302 stores the raw image pixels and the final pooling layer 320 determines the class scores. Each of the intermediate convolution layers (convolution layer 306, convolution layer 312, and convolution layer 316), the rectifier activations (RELU layer 304, RELU layer 308, RELU layer 314, and RELU layer 318), and the pooling layers (pooling layer 310 and pooling layer 320) along the processing path is shown as a column.

The VGG net 300 replaces the large single-layer filters of basic CNNs with multiple 3×3 filters in series. For a given receptive field (the effective area of the input image on which the output depends), multiple stacked smaller filters may perform better at image feature classification than a single layer with a larger filter size, because the multiple non-linear layers increase the depth of the network, which enables it to learn more complex features. In a VGG net 300, each pooling layer may be only 2×2.
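
To see why stacked 3×3 filters can replace a single larger filter, note that two 3×3 convolutions in series have an effective 5×5 receptive field while applying two non-linearities instead of one. A minimal sketch of such a VGG-style stack, assuming the PyTorch library and an arbitrary channel count of 64:

    import torch
    import torch.nn as nn

    # Two stacked 3x3 convolutions: each output neuron of the second layer
    # "sees" a 5x5 region of the original input, with two non-linearities.
    stacked = nn.Sequential(
        nn.Conv2d(64, 64, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv2d(64, 64, kernel_size=3, padding=1),
        nn.ReLU(),
        # VGG-style pooling layer: only 2x2, with a stride of two
        nn.MaxPool2d(kernel_size=2, stride=2),
    )

    x = torch.randn(1, 64, 32, 32)
    print(stacked(x).shape)  # torch.Size([1, 64, 16, 16])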

Brief Description:

illustrates a convolution layer filtering 400 in accordance with one embodiment.

Detailed Description:

Figure 4 illustrates a convolution layer filtering 400 that connects the outputs from groups of neurons in a convolution layer 402 to neurons in a next layer 406. A receptive field is defined for the convolution layer 402, in this example sets of 5×5 neurons. The collective outputs of each neuron in the receptive field are weighted and mapped to a single neuron in the next layer 406. This weighted mapping is referred to as the filter 404 for the convolution layer 402 (or sometimes as the kernel of the convolution layer 402). The depth of the filter 404 is not illustrated in this example (i.e., the filter 404 is actually a cubic volume of neurons in the convolution layer 402, not a square as illustrated), so what is shown is a “slice” of the full filter 404. The filter 404 is slid, or convolved, around the input image, each time mapping to a different neuron in the next layer 406. For example, Figure 4 shows the filter 404 stepped to the right by one unit (the “stride”), creating a receptive field slightly offset from the first one and mapping its output to the next neuron in the next layer 406. The stride may be, and often is, a number other than one; larger strides reduce the overlap between receptive fields and hence further reduce the size of the next layer 406. Every unique receptive field in the convolution layer 402 that can be defined in this stepwise manner maps to a different neuron in the next layer 406. Thus, if the convolution layer 402 is 32×32×3 neurons per slice, the next layer 406 need only be 28×28×1 neurons to cover all the receptive fields of the convolution layer 402. This is referred to as an activation map or feature map, and the filtering thus reduces layer complexity. There are 784 different ways that a 5×5 filter can uniquely fit on a 32×32 convolution layer 402, so the next layer 406 need only be 28×28. The depth of the convolution layer 402 is also reduced from 3 to 1 in the next layer 406.

The total number of layers to use in a CNN, the number of convolution layers, the filter sizes, and the values of the strides at each layer are examples of “hyperparameters” of the CNN.
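
The layer-size arithmetic above follows from a standard formula: for an input of width W, a filter of size F, and a stride S (with no padding), the output width is (W − F)/S + 1. A quick check of the 32 → 28 example in Python:

    def conv_output_size(input_size: int, filter_size: int, stride: int = 1) -> int:
        """Output width of a convolution with no padding."""
        return (input_size - filter_size) // stride + 1

    side = conv_output_size(32, 5, stride=1)
    print(side)         # 28
    print(side * side)  # 784 unique placements of a 5x5 filter on 32x32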

Brief Description:

illustrates a pooling layer function 500 in accordance with one embodiment.

Detailed Description:

Figure 5 illustrates a pooling layer function 500 with a 2×2 receptive field and a stride of two. The pooling layer function 500 is an example of the maxpool pooling technique: the outputs of all the neurons in a particular receptive field of the input layer 502 are replaced by the maximum-valued one of those outputs in the pooling layer 504. Other options for pooling layers are average pooling and L2-norm pooling. The rationale for using a pooling layer is that once a specific feature is recognized in the original input volume (it will produce a high activation value), its exact location is not as important as its location relative to the other features. Pooling layers can drastically reduce the spatial dimensions of the input layer 502 from that point forward in the neural network (the length and the width change, but not the depth). This serves two main purposes. The first is that the number of parameters, or weights, is greatly reduced, lessening the computation cost. The second is that pooling helps control overfitting. Overfitting refers to a model so tuned to its training examples that it is not able to generalize well when applied to live data sets.
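
A minimal NumPy sketch of the maxpool operation described above, with a 2×2 receptive field and a stride of two; the input dimensions are illustrative:

    import numpy as np

    def maxpool_2x2(layer: np.ndarray) -> np.ndarray:
        """Max pooling with a 2x2 receptive field and a stride of two.
        Halves the height and width; the depth is unchanged."""
        h, w, d = layer.shape
        cells = layer.reshape(h // 2, 2, w // 2, 2, d)
        return cells.max(axis=(1, 3))

    x = np.random.rand(32, 32, 3)  # input layer: 32x32 spatial, depth 3
    y = maxpool_2x2(x)
    print(y.shape)                 # (16, 16, 3): spatial dims halved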


Parts List

100

convolutional neural network

102

input layer

104

convolutional layer

106

output layer

200

convolutional neural network layers

202

input layer region

204

subregion of the input layer region

206

convolutional layer

208

convolutional layer subregion

300

VGG net

302

convolution layer

304

RELU layer

306

convolution layer

308

RELU layer

310

pooling layer

312

convolution layer

314

RELU layer

316

convolution layer

318

RELU layer

320

pooling layer

400

convolution layer filtering

402

convolution layer

404

filter

406

next layer

500

pooling layer function

502

input layer

504

pooling layer


Terms/Definitions

Network Operating Environment


Drawings

Brief Description:

Figure 1 is a block diagram of an exemplary network operating environment for mobile devices

Detailed Description:

Exemplary Operating Environment 

Figure 1 is a block diagram of an exemplary network operating environment 100 for the mobile devices of Figures 1 through 15. Mobile device(s) a 112 and mobile device(s) b 114 can, for example, communicate over one or more wired and/or wireless network(s) 102 in data communication. For example, a wireless network 110, e.g., a cellular network, can communicate with a wide area network 104 (WAN), such as the internet, by use of a gateway 108. Likewise, an access device 106, such as an 802.11g wireless access point, can provide communication access to the wide area network 104.

In some implementations, both voice and data communications can be established over wireless network 110 and the access device 106. For example, mobile device(s) a 112 can place and receive phone calls (e.g., using voice over Internet Protocol (VoIP) protocols), send and receive e-mail messages (e.g., using Post Office Protocol 3 (POP3)), and retrieve electronic documents and/or streams, such as web pages, photographs, and videos, over wireless network 110, gateway 108, and wide area network 104 (e.g., using Transmission Control Protocol/Internet Protocol (TCP/IP) or User Datagram Protocol (UDP)). Likewise, in some implementations, the mobile device(s) b 114 can place and receive phone calls, send and receive e-mail messages, and retrieve electronic documents over the access device 106 and the wide area network 104. In some implementations, mobile device(s) a 112 or mobile device(s) b 114 can be physically connected to the access device 106 using one or more cables and the access device 106 can be a personal computer. In this configuration, mobile device(s) a 112 or mobile device(s) b 114 can be referred to as a “tethered” device. 
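
As an illustrative sketch of the data side of such communications, the following Python snippet retrieves an electronic document over TCP/IP using only the standard library; the URL is a placeholder, not an address taken from this document.

    from urllib.request import urlopen

    # Retrieve an electronic document (e.g., a web page) over the wide
    # area network using HTTP on top of TCP/IP. The URL is a placeholder.
    with urlopen("https://example.com/") as response:
        document = response.read()

    print(len(document), "bytes retrieved")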

Mobile device(s) a 112 and mobile device(s) b 114 can also establish communications by other means. For example, mobile device(s) a 112 can communicate with other wireless devices, e.g., other mobile devices, cell phones, etc., over the wireless network 110. Likewise, mobile device(s) a 112 and mobile device(s) b 114 can establish peer-to-peer communications 116, e.g., a personal area network, by use of one or more communication subsystems, such as Bluetooth™ communication devices. Other communication protocols and topologies can also be implemented.

The mobile device(s) a 112 or mobile device(s) b 114 can, for example, communicate with one or more services, such as location service(s) 118 and map service 120, over the one or more wired and/or wireless networks. For example, one or more location service(s) 118 can conduct surveys of venues, generate location fingerprint data for each venue, and provide the location fingerprint data to mobile device(s) a 112 or mobile device(s) b 114. Map service 120 can, for example, provide maps of venues, e.g., maps of structures of buildings, to mobile device(s) a 112 or mobile device(s) b 114.

Mobile device(s) a 112 or mobile device(s) b 114 can also access other data and content over the one or more wired and/or wireless networks. For example, content publishers, such as news sites, Really Simple Syndication (RSS) feeds, web sites, blogs, social networking sites, developer networks, etc., can be accessed by mobile device(s) a 112 or mobile device(s) b 114. Such access can be provided by invocation of a web browsing function or application (e.g., a browser) in response to a user touching, for example, a web object.

A number of implementations of the invention have been described. Nevertheless, it will be understood that various modifications can be made without departing from the spirit and scope of the invention.


Parts List

100

network operating environment

102

wired and/or wireless network(s)

104

wide area network

106

access device

108

gateway

110

wireless network

112

mobile device(s) a

114

mobile device(s) b

116

peer-to-peer communications

118

location service(s)

120

map service


Terms/Definitions

web sites

internet

surveys

web object

cellular network

communications

venues

personal computer

voice and data communications

various modifications

topologies

Really Simple Syndication

invention

spirit and scope

provide maps

location service(s)

communication

access device

phone calls

such access

web pages

VoIP

data communication

photographs

blogs

buildings

user touching

social networking sites

wireless network

content publishers

Post Office Protocol

response

configuration

communication access

personal area network

Internet Protocol

block diagram

e-mail messages

Transmission Control Protocol/Internet Protocol

web browsing function or application

wired and/or wireless network(s)

news sites

videos

wide area network (WAN)

location fingerprint data

implementations

venue

peer-to-peer communications

maps

example

mobile device(s) b

User Datagram Protocol

developer networks

structures

number

invocation

mobile device(s) a

wide area network

browser

wireless device

communicate

gateway

voice

electronic documents and/or streams

network operating environment

map service

electronic documents

CNN Object Detection Models


Drawings

Brief Description:

illustrates a comparison between image classification, object detection, and instance segmentation.

Detailed Description:

Figure 1 illustrates a comparison between image classification, object detection, and instance segmentation. When a single object is in an image, the classification model 102 may be utilized to identify what is in the image. For instance, the classification model 102 identifies that a cat is in the image. In addition to the classification model 102, a classification and localization model 104 may be utilized to classify and identify the location of the cat within the image with a bounding box 106. When multiple objects are present within an image, an object detection model 108 may be utilized. The object detection model 108 utilizes bounding boxes to classify and locate the position of the different objects within the image. An instance segmentation model 110 detects each object in an image, its location, and its precise pixel-level segmentation with a segmentation region 112.

Image classification models classify images into a single category, usually corresponding to the most salient object. Photos and videos, however, are usually complex and contain multiple objects, so assigning a single label with an image classification model may become ambiguous. Object detection models are therefore more appropriate for identifying multiple relevant objects in a single image. The second significant advantage of object detection models over image classification models is that they can provide the localization of the objects.

Some of the models that may be utilized to perform image classification, object detection, and instance segmentation include, but are not limited to, Region-based Convolutional Network (R-CNN), Fast Region-based Convolutional Network (Fast R-CNN), Faster Region-based Convolutional Network (Faster R-CNN), Region-based Fully Convolutional Network (R-FCN), You Only Look Once (YOLO), Single-Shot Detector (SSD), Neural Architecture Search Net (NASNet), and Mask Region-based Convolutional Network (Mask R-CNN).

These models may utilize a variety of training datasets, including but not limited to the PASCAL Visual Object Classification (PASCAL VOC) and Common Objects in COntext (COCO) datasets.

The PASCAL Visual Object Classification (PASCAL VOC) dataset is a well-known dataset for object detection, classification, segmentation of objects, and so on. There are around 10,000 training and validation images containing bounding boxes with objects. Although the PASCAL VOC dataset contains only 20 categories, it is still considered a reference dataset for the object detection problem.

ImageNet has released an object detection dataset with bounding boxes since 2013. The training dataset is composed of around 500,000 images for training alone, spanning 200 categories.

The Common Objects in COntext (COCO) dataset was developed by Microsoft. It is used for caption generation, object detection, key point detection, and object segmentation. The COCO object detection task consists of localizing the objects in an image with bounding boxes and categorizing each of them among 80 categories.

Brief Description:

illustrates a Region-based Convolution Network 200.

Detailed Description:

Figure 2 illustrates an example of a Region-based Convolution Network 200 (R-CNN). Each region proposal feeds a convolutional neural network (CNN) to extract a features vector, possible objects are detected using multiple SVM classifiers, and a linear regressor modifies the coordinates of the bounding box. Regions of interest (ROI 202) are extracted from the input image 204. Each ROI 202 is resized/warped, creating a warped image region 206, which is forwarded to the convolutional neural network 208; the resulting features are fed to the support vector machines 212 and the bounding box linear regressors 210.

In R-CNN, the selective search method is an alternative to an exhaustive search over an image to capture object locations. It initializes small regions in an image and merges them with a hierarchical grouping, so that the final group is a box containing the entire image. The detected regions are merged according to a variety of color spaces and similarity metrics. The output is a small number of region proposals that could contain an object, obtained by merging small regions.

The R-CNN model combines the selective search method, to detect region proposals, with deep learning, to identify the object in these regions. Each region proposal is resized to match the input of a CNN, from which the method extracts a 4096-dimension vector of features. The features vector is fed into multiple classifiers to produce probabilities of belonging to each class. Each of these classes has a support vector machines 212 (SVM) classifier trained to infer the probability of detecting that object for a given vector of features. This vector also feeds a linear regressor to adapt the shape of the bounding box for a region proposal and thus reduce localization errors.
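
The R-CNN pipeline just described can be summarized in a hedged Python sketch. The names selective_search, warp, cnn_features, svm_classifiers, and bbox_regressor are hypothetical stand-ins for the components named above, not a real API, and the 227×227 warp size and 0.5 score threshold are illustrative values.

    # Hypothetical sketch of R-CNN inference; all callables are placeholders.
    def rcnn_detect(image, selective_search, warp, cnn_features,
                    svm_classifiers, bbox_regressor):
        detections = []
        for proposal in selective_search(image):             # region proposals
            region = warp(image, proposal, size=(227, 227))  # resize/warp
            features = cnn_features(region)                  # 4096-d features vector
            for cls, svm in svm_classifiers.items():         # one SVM per class
                score = svm(features)                        # class probability
                if score > 0.5:                              # illustrative threshold
                    box = bbox_regressor(features, proposal)  # refine the box
                    detections.append((cls, box, score))
        return detections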

The CNN model described is trained on the ImageNet dataset. It is fine-tuned using the region proposals corresponding to an intersection over union (IoU) greater than 0.5 with the ground-truth boxes. Two versions are produced: one uses the PASCAL VOC dataset and the other the ImageNet dataset with bounding boxes. The SVM classifiers are also trained for each class of each dataset.
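
Intersection over union, used above as the 0.5 fine-tuning threshold, can be computed directly. A minimal Python sketch, with boxes given as (x1, y1, x2, y2) corner coordinates:

    def iou(box_a, box_b):
        """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
        x1 = max(box_a[0], box_b[0])
        y1 = max(box_a[1], box_b[1])
        x2 = min(box_a[2], box_b[2])
        y2 = min(box_a[3], box_b[3])
        intersection = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return intersection / (area_a + area_b - intersection)

    print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143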

Brief Description:

illustrates a Fast Region-based Convolutional Network 300.

Detailed Description:

Figure 3 illustrates an example of a Fast Region-based Convolutional Network 300 (Fast R-CNN). The entire image (input image 306) feeds a CNN model (convolutional neural network 302) to detect RoIs (ROI 304) on the feature maps 310. Each region is separated using a RoI pooling layer (ROI pooling layer 308) and fed to the fully connected layers 312. The resulting vector is used by a softmax classifier 314 to detect the object and by the bounding box linear regressors 316 to modify the coordinates of the bounding box. The purpose of Fast R-CNN is to reduce the time consumption caused by the high number of models necessary to analyze all the region proposals.

A main CNN with multiple convolutional layers takes the entire image as input, instead of using one CNN for each region proposal as in R-CNN. Regions of Interest (RoIs) are detected with the selective search method applied to the produced feature maps. Formally, the feature map size is reduced using a RoI pooling layer to get valid Regions of Interest with fixed height and width as hyperparameters. Each RoI layer feeds fully-connected layers, creating a features vector. The vector is used to predict the observed object with a softmax classifier and to adapt bounding box localizations with a linear regressor.
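
A hedged NumPy sketch of the RoI pooling step described above: each region of interest on the feature map is divided into a fixed grid of cells and max-pooled, so every RoI yields the same fixed-size output regardless of its original shape. The 7×7 grid and the feature map dimensions are illustrative assumptions.

    import numpy as np

    def roi_pool(feature_map: np.ndarray, roi, output_size=(7, 7)):
        """Max-pool one RoI (x1, y1, x2, y2) on an (H, W, C) feature map
        into a fixed output_size grid of cells."""
        x1, y1, x2, y2 = roi
        region = feature_map[y1:y2, x1:x2, :]
        out_h, out_w = output_size
        h_edges = np.linspace(0, region.shape[0], out_h + 1, dtype=int)
        w_edges = np.linspace(0, region.shape[1], out_w + 1, dtype=int)
        pooled = np.zeros((out_h, out_w, feature_map.shape[2]))
        for i in range(out_h):
            for j in range(out_w):
                cell = region[h_edges[i]:h_edges[i + 1],
                              w_edges[j]:w_edges[j + 1], :]
                pooled[i, j] = cell.max(axis=(0, 1))
        return pooled

    fmap = np.random.rand(64, 64, 256)  # feature maps from the main CNN
    print(roi_pool(fmap, (10, 12, 40, 50)).shape)  # (7, 7, 256)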

Brief Description:

illustrates a Faster Region-based Convolutional Network 400.

Detailed Description:

Figure 4 illustrates an example of a Faster Region-based Convolutional Network 400 (Faster R-CNN).

Region proposals detected with the selective search method were still necessary in the previous model, which is computationally expensive. The Region Proposal Network (RPN) was introduced to directly generate region proposals, predict bounding boxes, and detect objects. Faster R-CNN is a combination of the RPN and the Fast R-CNN model.

A CNN model takes the entire image as input and produces a feature map 410. A window of size 3×3 (sliding window 402) slides over all the feature maps and outputs a features vector (intermediate layer 404) linked to two fully-connected layers, one for box regression and one for box classification. Multiple region proposals are predicted by the fully-connected layers. A maximum of k regions is fixed, so the output of the box regression layer 408 has a size of 4k (the coordinates of the boxes and their height and width) and the output of the box classification layer 406 has a size of 2k (“objectness” scores indicating whether or not an object is in the box). The k region proposals detected by the sliding window are called anchors.
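
The 3×3 sliding window with 4k and 2k outputs corresponds to a small convolutional head. A hedged PyTorch sketch, in which k = 9 anchors, the 512-channel feature map, and its 40×60 spatial size are assumed example values:

    import torch
    import torch.nn as nn

    k = 9  # region proposals (anchors) per sliding-window position

    # 3x3 sliding window 402 over the feature map, implemented as a
    # convolution producing the intermediate layer 404.
    intermediate = nn.Conv2d(512, 512, kernel_size=3, padding=1)
    box_regression = nn.Conv2d(512, 4 * k, kernel_size=1)      # 4k outputs
    box_classification = nn.Conv2d(512, 2 * k, kernel_size=1)  # 2k outputs

    feature_map = torch.randn(1, 512, 40, 60)  # from the initial CNN model
    h = torch.relu(intermediate(feature_map))
    boxes = box_regression(h)       # box coordinates: (1, 36, 40, 60)
    scores = box_classification(h)  # "objectness" scores: (1, 18, 40, 60)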

When the anchor boxes 412 are detected, they are selected by applying a threshold over the “objectness” score to keep only the relevant boxes. These anchor boxes and the feature maps computed by the initial CNN model feed a Fast R-CNN model.

The entire image feeds a CNN model to produce anchor boxes as region proposals, each with a confidence of containing an object. A Fast R-CNN is then used, taking the feature maps and the region proposals as inputs. For each box, it produces the probability of detecting each object and a correction to the location of the box.

Faster R-CNN uses the RPN to avoid the selective search method, which accelerates the training and testing processes and improves performance. The RPN uses a model pre-trained on the ImageNet dataset for classification, and it is fine-tuned on the PASCAL VOC dataset. The generated region proposals with anchor boxes are then used to train the Fast R-CNN. This process is iterative.


Parts List

102

classification model

104

classification and localization model

106

bounding box

108

object detection model

110

instance segmentation model

112

segmentation region

200

Region-based Convolution Network

202

ROI

204

input image

206

warped image region

208

convolutional neural network

210

bounding box linear regressors

212

support vector machines

300

Fast Region-based Convolutional Network

302

convolutional neural network

304

ROI

306

input image

308

ROI pooling layer

310

feature maps

312

fully connected layers

314

softmax classifier

316

bounding box linear regressors

400

Faster Region-based Convolutional Network

402

sliding window

404

intermediate layer

406

box classification layer

408

box regression layer

410

feature map

412

anchor boxes


Terms/Definitions