Client: Apple

Handheld Device Schematic


Drawings

Brief Description:

Figure 1 is a schematic front view of a handheld device 100 representing one embodiment.

Detailed Description:

The handheld device 100 of Figures 1 and 2 may represent, for example, a cellular phone, a portable phone, a media player, a personal data organizer, a handheld game platform, a tablet computer, a notebook computer, or any combination of such devices. By way of example, the handheld device 100 may be a model of an iPad®, iPod®, iPhone®, or MacBook® available from Apple Inc. of Cupertino, California. Figure 1 depicts the front of the handheld device 100, while Figure 2 depicts the back of the handheld device 100.

The handheld device 100 may include an enclosure 116 to protect interior components from physical damage and to shield them from electromagnetic interference. The enclosure 116 may include a window a 108 and a window b 112 configured to conceal components such as an image capture device 110 and a biometric sensor 114, respectively. By concealing the image capture device 110 and the biometric sensor 114 behind the enclosure 116, these components may remain unseen when not in use. For example, when the image capture device 110 and the biometric sensor 114 are not in use, they may be concealed by selectively causing the window a 108 and the window b 112 to become opaque, or “closed.” Since the window a 108 and the window b 112 may be color-matched so as to be indistinguishable from the enclosure 116, the enclosure 116 may appear seamless when the window a 108 and the window b 112 are closed. When a concealed component, such as the image capture device 110 and/or the biometric sensor 114, is to be used, it may be exposed from beneath the enclosure 116 by selectively causing the window a 108 and/or the window b 112 to become transparent, or “open.” Components such as the image capture device 110 and the biometric sensor 114 may remain exposed for as long as desired.

In some embodiments, components of the handheld device 100, such as the image capture device 110 and the biometric sensor 114, may be selectively exposed when certain component-using features of the handheld device 100 are activated. By way of example, an image capture feature of the handheld device 100, which may employ the image capture device 110, may become activated when a user elects to run a camera application selectable via a graphical user interface (GUI 106). In general, the GUI 106 may include one or more icon(s) 120 for providing access to features of the handheld device 100 (e.g., applications, features of an operating system of the handheld device 100, features of firmware of the handheld device 100, and so forth). At times, such features may utilize components of the handheld device 100 that are hidden behind the window a 108, window b 112, window c 202, or window d 204 (e.g., the image capture device 110 hidden behind the window a 108, or the biometric sensor 114 hidden behind the window a 108 or window b 112). Thus, in some embodiments, when the handheld device 100 detects that a feature (e.g., a camera application) that is expected to use a hidden component (e.g., the image capture device 110) has been selected via the GUI 106, the window controller 22 of Figure 1 may open the associated window a 108, window b 112, window c 202, or window d 204 (e.g., the window a 108). When the handheld device 100 detects that the feature no longer requires the component (e.g., the camera application is closed), the window controller 22 may close the window a 108, window b 112, window c 202, or window d 204, hiding the component.
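
For illustration only (the disclosure itself specifies no code), the following is a minimal Python sketch of the behavior just described: a window controller opens a component's window when a feature using that component is activated, and closes it when the feature is done. All names are hypothetical.

```python
# Hypothetical sketch of the window-controller behavior described above.
class WindowController:
    """Maps concealed components to their windows and toggles opacity."""

    def __init__(self, window_for_component):
        # e.g., {"image_capture_device": "window_a", "biometric_sensor": "window_b"}
        self._window_for_component = window_for_component
        self._open_windows = set()

    def on_feature_activated(self, component):
        window = self._window_for_component[component]
        if window not in self._open_windows:
            print(f"{window}: transparent ('open'), exposing {component}")
            self._open_windows.add(window)

    def on_feature_deactivated(self, component):
        window = self._window_for_component[component]
        if window in self._open_windows:
            print(f"{window}: opaque ('closed'), concealing {component}")
            self._open_windows.remove(window)


controller = WindowController({"image_capture_device": "window_a"})
controller.on_feature_activated("image_capture_device")    # camera app launched
controller.on_feature_deactivated("image_capture_device")  # camera app closed
```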


Brief Description:

Figure 2 is a schematic back view of the handheld device 100 illustrated in Figure 1.

Detailed Description:

The technique of exposing concealed components is not limited to dynamically changing the opacity of the window a 108, window b 112, window c 202, or window d 204 upon the launch of applications within the electronic device 10. For example, as illustrated in Figure 2, the back of the handheld device 100 may have two windows, window c 202 and window d 204, disposed above an image capture device 110 and a strobe 206, respectively. Other embodiments may include more or fewer windows and corresponding concealed components. Initially, the window c 202 and the window d 204 may conceal the image capture device 110 and the strobe 206. In one embodiment, the window c 202 disposed above the image capture device 110 may be opened by the window controller 22 upon selection of the icon(s) 120 of Figure 1 linking to the camera application. The window d 204 disposed above the strobe 206 may remain closed until the camera application determines that increased illumination is desired. Upon such a determination, the camera application may provide some indication to the window controller 22 that the window d 204 disposed above the strobe 206 should be opened. The window controller 22 may “open” the window d 204 by making it transparent, exposing the strobe 206 for use. Upon determining that the strobe 206 is no longer desired for use, the camera application may provide some indication to the window controller 22 that the window d 204 should be closed. The window controller 22 then may cause the window d 204 to “close,” becoming opaque and hiding the strobe 206. Upon completion of the use of the image capture device 110, the window controller 22 may also close the window c 202 disposed above the image capture device 110, causing the image capture device 110 to disappear into the enclosure 116.
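
A sketch of the strobe sequence above, under an assumed interface (the controller object is a stand-in for the window controller 22; none of these names come from the disclosure):

```python
# Hypothetical sketch: the camera window stays open for the whole session;
# the strobe window opens only while increased illumination is needed.
class StubWindowController:
    def open(self, window):  print(window, "-> transparent")
    def close(self, window): print(window, "-> opaque")

def capture_photo(strobe):
    print("photo captured", "with strobe" if strobe else "without strobe")

def run_camera_session(window_controller, low_light):
    window_controller.open("window_c")             # expose image capture device
    try:
        if low_light:
            window_controller.open("window_d")     # expose strobe
            try:
                capture_photo(strobe=True)
            finally:
                window_controller.close("window_d")    # conceal strobe
        else:
            capture_photo(strobe=False)
    finally:
        window_controller.close("window_c")   # device disappears into enclosure

run_camera_session(StubWindowController(), low_light=True)
```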

In some embodiments, even the display 118 of an electronic device 10 may be concealed. For example, Figures 16A and 16B illustrate a handheld device 100 having a window a 108, window b 112, window c 202, or window d 204 disposed above a display 118. As shown in Figure 16A, when the display 118 is not in use, the window may remain closed, hiding the display 118 and giving the appearance of a single seamless enclosure without a display 118. When the display 118 is activated, the window may be opened, exposing the display 118, as shown in Figure 16B. By way of example, the display 118 may be activated when a user selects certain of the input structures 104 of the handheld device 100.

The window a 108, window b 112, window c 202, or window d 204 may conceal components within the enclosure 116 and/or, when the display 118 is transparent (e.g., a transparent OLED display), beneath the display 118 of the electronic device 10.


Parts List

100

handheld device

102

opening loop block

104

input structures

106

GUI

108

window a

110

image capture device

112

window b

114

biometric sensor

116

enclosure

118

display

120

icon(s)

202

window c

204

window d

206

strobe


Terms/Definitions

concealed component

window d

biometric sensor

suitable component

personal data organizer

touch layer

example

window(s)

window controller

Cupertino

touch inputs

electronic display system

transparent material

printing layers

electromagnetic interference

enclosure

selection

indication

several printing layers

ambient light layer

infrared layer

icon(s)

transparent OLED display

underlying layers

protection

image capture device

camera application

electronic device

opaque

features

California

schematic front view

other embodiments

handheld device

glass or plastic

Apple Inc

opacity

image capture feature

technique

input structures

black enclosure system

embodiments

handheld game platform

strobe

window c

more or fewer windows

figure

interior components

current level

increased illumination

window b

protective cover layer

even the display

such devices

GUI

view

color

launch

display cutouts

immediate environment

MacBook®

operating system

media player

hidden component

exposure

black color layer

access

front

user elects

tablet computer

layer

completion

single seamless enclosure

suitable embodiment

window cutouts

utilization

display

lower layers

capacitive touch layer

device

notebook computer

portable phone

component

cellular phone

infrared radiation

physical damage

enclosure system

concealed components

other layers

window a

applications

appearance

color layer

components

associated window

visible light

combination

two windows

user

certain component-using features

back

feature

wear

certain components

layers

times

model

firmware

such a determination

windows

such features

Computing Device and Computing Environment


Drawings

Brief Description:

Figure 1 illustrates a general-purpose computing device 100 embodiment.

Detailed Description:

With reference to Figure 1, an exemplary system includes a general-purpose computing device 100, including a processing unit (CPU or processor 110) and a system bus 126 that couples various system components, including the system memory 112 such as read only memory (ROM 114) and random access memory (RAM 116), to the processor 110. The general-purpose computing device 100 can include a cache 108 of high speed memory connected directly with, in close proximity to, or integrated as part of the processor 110. The general-purpose computing device 100 copies data from the system memory 112 and/or the storage device 118 to the cache 108 for quick access by the processor 110. In this way, the cache 108 provides a performance boost that avoids processor 110 delays while waiting for data. These and other modules can control or be configured to control the processor 110 to perform various actions. Other system memory 112 may be available for use as well. The system memory 112 can include multiple different types of memory with different performance characteristics. It can be appreciated that the disclosure may operate on a general-purpose computing device 100 with more than one processor 110 or on a group or cluster of computing devices networked together to provide greater processing capability. The processor 110 can include any general purpose processor and a hardware module or software module, such as module 1 (mod1 120), module 2 (mod2 122), and module 3 (mod3 124) stored in storage device 118, configured to control the processor 110, as well as a special-purpose processor where software instructions are incorporated into the actual processor design. The processor 110 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric.

The system bus 126 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. A basic input/output system (BIOS) stored in ROM 114 or the like may provide the basic routine that helps to transfer information between elements within the general-purpose computing device 100, such as during start-up. The general-purpose computing device 100 further includes a storage device 118 such as a hard disk drive, a magnetic disk drive, an optical disk drive, a tape drive, or the like. The storage device 118 can include software modules mod1 120, mod2 122, and mod3 124 for controlling the processor 110. Other hardware or software modules are contemplated. The storage device 118 is connected to the system bus 126 by a drive interface. The drives and the associated computer readable storage media provide nonvolatile storage of computer readable instructions, data structures, program modules, and other data for the general-purpose computing device 100. In one aspect, a hardware module that performs a particular function includes the software component stored in a non-transitory computer-readable medium in connection with the necessary hardware components, such as the processor 110, system bus 126, output device 104, and so forth, to carry out the function. The basic components are known to those of skill in the art, and appropriate variations are contemplated depending on the type of device, such as whether the general-purpose computing device 100 is a small, handheld computing device, a desktop computer, or a computer server.

Although the exemplary embodiment described herein employs a storage device 118, it should be appreciated by those skilled in the art that other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, digital versatile disks, cartridges, random access memories (RAM 116), read only memory (ROM 114), a cable or wireless signal containing a bit stream, and the like, may also be used in the exemplary operating environment. Non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.

To enable user interaction with the general-purpose computing device 100, an input device 102 represents any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, a keyboard, a mouse, motion input, speech, and so forth. An output device 104 can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems enable a user to provide multiple types of input to communicate with the general-purpose computing device 100. The communications interface 106 generally governs and manages the user input and system output. There is no restriction on operating on any particular hardware arrangement, and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.

For clarity of explanation, the illustrative system embodiment is presented as including individual functional blocks, including functional blocks labeled as a “processor” or processor 110. The functions these blocks represent may be provided through the use of either shared or dedicated hardware, including, but not limited to, hardware capable of executing software and hardware, such as a processor 110, that is purpose-built to operate as an equivalent to software executing on a general purpose processor. For example, the functions of one or more processors presented in Figure 1 may be provided by a single shared processor or multiple processors. (Use of the term “processor” should not be construed to refer exclusively to hardware capable of executing software.) Illustrative embodiments may include microprocessor and/or digital signal processor (DSP) hardware, read-only memory (ROM 114) for storing software performing the operations discussed below, and random access memory (RAM 116) for storing results. Very large scale integration (VLSI) hardware embodiments, as well as custom VLSI circuitry in combination with a general purpose DSP circuit, may also be provided.

The logical operations of the various embodiments are implemented as: (1) a sequence of computer implemented steps, operations, or procedures running on a programmable circuit within a general use computer; (2) a sequence of computer implemented steps, operations, or procedures running on a specific-use programmable circuit; and/or (3) interconnected machine modules or program engines within the programmable circuits. The general-purpose computing device 100 shown in Figure 1 can practice all or part of the recited methods, can be a part of the recited systems, and/or can operate according to instructions in the recited non-transitory computer-readable storage media. Such logical operations can be implemented as modules configured to control the processor 110 to perform particular functions according to the programming of the module. For example, Figure 1 illustrates three modules, mod1 120, mod2 122, and mod3 124, which are modules configured to control the processor 110. These modules may be stored on the storage device 118 and loaded into RAM 116 or system memory 112 at runtime, or may be stored, as would be known in the art, in other computer-readable memory locations.
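
As an illustrative sketch (not the disclosure's implementation), loading this kind of processor-controlling module from a storage device into memory at runtime can be expressed with Python's standard importlib machinery; the "storage" path and mod* file names are assumptions:

```python
# Illustrative sketch of loading modules (e.g., mod1, mod2, mod3) from a
# storage device into RAM at runtime.
import importlib.util
from pathlib import Path

def load_module(path: Path):
    """Load a single module file from storage into memory and return it."""
    spec = importlib.util.spec_from_file_location(path.stem, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)   # executes the module body in memory
    return module

# Hypothetical usage: load every mod*.py found on the storage device.
# modules = [load_module(p) for p in Path("storage").glob("mod*.py")]
```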

Brief Description:

Figure 2 illustrates a computing environment 200 in accordance with one embodiment.

Detailed Description:

Having disclosed some components of a computing system, the disclosure now turns to Figure 2, which illustrates a general purpose mobile computing environment 200. A communication network 206 connects the devices and applications hosted in the computing environment 200. In this computing environment 200, different devices may communicate with and send commands to each other in various ways. The application server 204, for example, may function as an intermediary between two or more user devices, such as user station 202, mobile device(s) a 214, and mobile device(s) b 210. The application server 204 may pass messages sent from one user device to another. For example, the application server 204 may receive a request from mobile device(s) a 214 (the “requesting device”) to locate another device, mobile device(s) b 210 (the “requested device”). In response to such a request (preferably after appropriate authentication and authorization steps have been taken to ensure the request is authorized by the user of the requested device), the application server 204 may send a request to the requested mobile device(s) b 210 and receive a response containing information relating to its location. The requested mobile device(s) b 210 may have obtained this location information based on signals it received from, for example, GPS satellites 216. Having received a response, the application server 204 may then send the information to the requesting mobile device(s) a 214. Alternatively, the application server 204 does not send a request to the requested mobile device(s) b 210 because it has recent location information relating to that device cached. In such an embodiment, the application server 204 may respond to a request by sending cached location information to the requesting mobile device(s) a 214 without communicating with the requested mobile device(s) b 210.
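
A minimal sketch of this intermediary behavior, with hypothetical names and an assumed cache freshness window (the disclosure does not define what counts as "recent"):

```python
# Hypothetical sketch: answer a locate request from cache when recent location
# data exists; otherwise query the requested device through some transport.
import time

CACHE_TTL_SECONDS = 300  # assumption: "recent" means within five minutes

class ApplicationServer:
    def __init__(self, device_transport):
        self._cache = {}                    # device_id -> (location, timestamp)
        self._transport = device_transport  # assumed object that reaches devices

    def handle_locate_request(self, requester_id, requested_id):
        if not self._is_authorized(requester_id, requested_id):
            raise PermissionError("locate request not authorized")
        cached = self._cache.get(requested_id)
        if cached and time.time() - cached[1] < CACHE_TTL_SECONDS:
            return cached[0]                # answer from cache, skip the device
        location = self._transport.request_location(requested_id)
        self._cache[requested_id] = (location, time.time())
        return location

    def _is_authorized(self, requester_id, requested_id):
        return True  # placeholder for the authentication/authorization steps
```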

The user station 202, mobile device(s) a 214, and mobile device(s) b 210 preferably have one or more location-aware applications that may run on them. Of these applications, some may have the functionality to send requests to other user devices to enable a requesting user to locate a friend’s device. Upon receiving authorization to locate, a requesting device may then be able to send location requests to requested devices and receive responses containing the location of the requested device. Authorization is preferably managed at the server level, but may also be managed at the device level in addition or as an alternative.

Referring back to Figure 2, the communication network 206 can be any type of network, including a local area network (“LAN”), such as an intranet, a wide area network (“WAN”), such as the internet, or any combination thereof. Further, the communication network 206 can be a public network, a private network, or a combination thereof. The communication network can also be implemented using any type or types of physical media, including wired communication paths and wireless communication paths associated with one or more service providers. Additionally, the communication network 206 can be configured to support the transmission of messages formatted using a variety of protocols.

A device such as a user station 202 may also be configured to operate in the computing environment 200. The user station 202 can be any general-purpose computing device that can be configured to communicate with a web-enabled application, such as through a web browser. For example, the user station 202 can be a personal computing device, such as a desktop or workstation, or a portable computing device, such as a laptop, a smart phone, or a post-PC device. The user station 202 can include some or all of the features, components, and peripherals of the general-purpose computing device 100 of Figure 1.

User station 202 can further include a network connection to the communication network 206. The network connection can be implemented through a wired or wireless interface, and can support bi-directional communication between the user station 202 and one or more other computing devices over the communication network 206. Also, the user station 202 may include an interface application, such as a web browser or custom application, for communicating with a web-enabled application.

An application server 204 can also be configured to operate in the computing environment 200. The application server 204 can be any computing device that can be configured to host one or more applications. For example, the application server 204 can be a server, a workstation, or a personal computer. In some implementations, the application server 204 can be configured as a collection of computing devices, e.g., servers, sited in one or more locations. The application server 204 can include some or all of the features, components, and peripherals of the general-purpose computing device 100 of Figure 1.

The application server 204 can also include a network connection to the communication network 206. The network connection can be implemented through a wired or wireless interface, and can support bi-directional communication between the application server 204 and one or more other computing devices over the communication network 206. Further, the application server 204 can be configured to host one or more applications. For example, the application server 204 can be configured to host a remote management application that facilitates communication with one or more mobile devices connected with the communication network 206. The mobile device(s) a 214, mobile device(s) b 210 and the application server 204 can operate within a remote management framework to execute remote management functions. The application server 204 can be configured to host a notification service application configured to support bi-directional communication over the communication network 206 between multiple communication devices included in the computing environment 200. For example, the notification service application can permit a variety of messages to be transmitted and received by multiple computing devices.

In some implementations, the notification service can include a defined namespace, in which a unique command collection topic can be created for each subscribing mobile device. A unique identifier can be used to associate a subscribing mobile device with the corresponding command collection topic, such as an assigned number or address. The unique identifier also can be embedded in a Uniform Resource Identifier (URI) that is associated with a subscribed command collection topic. Further, one or more command nodes can be created below a command collection topic, such that each command node corresponds to a particular remote command type. For example, a command collection topic can include a separate command node for a location command.
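
One way to picture this topic layout, sketched with an assumed URI scheme and command set (the disclosure does not specify the concrete format):

```python
# Hypothetical sketch: one command collection topic per subscribing device,
# keyed by a unique identifier, with one command node per remote command type.
COMMAND_TYPES = ("locate", "notification", "message")

def command_collection_topic(device_id: str) -> str:
    # Unique identifier embedded in a URI for the subscribed topic.
    return f"notification-service://commands/{device_id}"

def command_node(device_id: str, command_type: str) -> str:
    if command_type not in COMMAND_TYPES:
        raise ValueError(f"unknown remote command type: {command_type}")
    return f"{command_collection_topic(device_id)}/{command_type}"

print(command_node("device-1234", "locate"))
# notification-service://commands/device-1234/locate
```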

Through the use of separate command nodes, multiple commands can be transmitted to one or more mobile devices substantially simultaneously. In some implementations, if multiple commands are received in a command collection topic, server time stamps can be compared to determine an order of execution.

Through the notification service, a publisher, such as a remote management application, can publish a remote command message to a command collection topic that is associated with a particular mobile device. When a remote command message is published to the command collection topic, a notification message can be transmitted to the one or more subscribing mobile devices. The mobile device can then access the subscribed topic and retrieve one or more published messages. This communication between the publisher and the mobile device can be decoupled. Further, the remote command message can be published to the appropriate command node of the command collection topic. Additionally, a mobile device receiving a remote command message can publish a response to a result topic hosted by the notification service. A publisher, such as a remote management application, can subscribe to the result topic and can receive any published response messages.
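
A toy sketch of this decoupled publish/notify/retrieve flow, including the server-time-stamp ordering mentioned above; the in-memory broker and message shapes are assumptions, not the disclosed service:

```python
# Hypothetical in-memory sketch of the notification service's pub/sub flow.
class NotificationService:
    def __init__(self):
        self._topics = {}        # topic -> list of (server_timestamp, message)
        self._subscribers = {}   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message, server_timestamp):
        self._topics.setdefault(topic, []).append((server_timestamp, message))
        for notify in self._subscribers.get(topic, []):
            notify(topic)        # lightweight notification; payload fetched later

    def retrieve(self, topic):
        # Server time stamps determine execution order when several commands
        # have accumulated in a command collection topic.
        pending = self._topics.pop(topic, [])
        return [msg for _, msg in sorted(pending, key=lambda tm: tm[0])]


service = NotificationService()
topic = "commands/device-1234/locate"
service.subscribe(topic, lambda t: print("device notified on:", t))
service.publish(topic, {"type": "locate"}, server_timestamp=1.0)
print(service.retrieve(topic))   # the device retrieves the published command
```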

Further, the computing environment 200 can include one or more mobile devices, such as mobile device(s) a 214 and mobile device(s) b 210. These mobile devices are preferably smart phones, such as an Apple iPhone®, or post-PC devices, such as an Apple iPad®. Each of the mobile devices included in the computing environment 200 can include a network interface configured to establish a connection to the communication network 206. For example, mobile device(s) a 214 can establish a cellular (e.g., GSM, EDGE, 3G, or 4G) network connection that provides data access to the communication network 206. Such a connection may be facilitated by one or more cellular towers 208 located within the range of the mobile device(s) a 214 and mobile device(s) b 210 and connected to the communication network 206. Further, mobile device(s) b 210 can establish an IEEE 802.11 (i.e., WiFi or WLAN) network connection to the communication network 206. Such a connection may be facilitated by one or more wireless network router(s) 212 located within the range of the mobile device(s) a 214 and mobile device(s) b 210 and connected to the communication network 206. Also, either one of these mobile devices or an additional device may connect to the communication network 206 through the IEEE 802.16 (i.e., wireless broadband or WiBB) standard. Again, the mobile device(s) a 214 and mobile device(s) b 210 may employ the assistance of the cellular towers 208 or wireless network router(s) 212 to connect to the communication network 206.

Each of the mobile device(s) a 214 and mobile device(s) b 210 also can be configured to communicate with the notification service application hosted by the application server 204 to publish and receive messages. Further, each of the mobile device(s) a 214 and mobile device(s) b 210 can be configured to execute a remote management application or a remote management function responsive to a remote command received through the notification service application. In some embodiments, the remote management application can be integrated with the operating system of the mobile device.

A mobile device can execute a remote command to perform one or more associated functions. For example, the remote commands can include locate commands, notification commands, and message commands. A message command can be used to present a text-based message on the display of a mobile device. A locate command can be used to cause a mobile device to transmit a message indicating its location at the time the locate command is executed. The locate command may also command the mobile device to use certain resources, such as an embedded GPS system, to determine its location.
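
A minimal dispatch sketch for the command types named above; the device API (display, gps_location, publish_result, play_alert) is hypothetical:

```python
# Hypothetical sketch of dispatching remote commands on the mobile device.
def execute_remote_command(device, command):
    kind = command["type"]
    if kind == "message":
        device.display(command["text"])          # show a text-based message
    elif kind == "locate":
        location = device.gps_location()         # e.g., via an embedded GPS
        device.publish_result({"location": location})   # response to result topic
    elif kind == "notification":
        device.play_alert()
    else:
        raise ValueError(f"unknown remote command: {kind}")
```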

Additionally, each of the mobile device(s) a 214 and mobile device(s) b 210 can include an input interface, through which one or more inputs can be received. For example, the input interface can include one or more of a keyboard, a mouse, a joystick, a trackball, a touch pad, a keypad, a touch screen, a scroll wheel, general and special purpose buttons, a stylus, a video camera, and a microphone. Each of the mobile device(s) a 214 and mobile device(s) b 210 can also include an output interface through which output can be presented, including one or more displays, one or more speakers, and a haptic interface. Further, a location interface, such as a Global Positioning System (GPS) processor, also can be included in one or more of the mobile device(s) a 214 and mobile device(s) b 210 to receive and process signals sent from GPS satellites 216 for obtaining location information, e.g., an indication of current location. In some implementations, general or special purpose processors included in one or more of the mobile device(s) a 214 and mobile device(s) b 210 can be configured to perform location estimation, such as through base station triangulation or through recognizing stationary geographic objects through a video interface.

Having disclosed some basic system components and concepts, the disclosure now turns to exemplary method embodiments 300a and 300b shown in Figures 3A and 3B, respectively. For the sake of clarity, the methods are discussed in terms of a general-purpose computing device 100, as shown in Figure 1, configured to practice the methods, and the operating environment shown in Figure 2. The steps outlined herein are exemplary and can be implemented in any combination thereof, including combinations that exclude, add, or modify certain steps.


Parts List

100

general-purpose computing device

102

input device

104

output device

106

communications interface

108

cache

110

processor

112

system memory

114

ROM

116

RAM

118

storage device

120

mod1

122

mod2

124

mod3

126

system bus

200

computing environment

202

user station

204

application server

206

communication network

208

cellular towers

210

mobile device(s) b

212

wireless network router(s)

214

mobile device(s) a

216

GPS satellites


Terms/Definitions

hardware module

physical media

hard disk drive

joystick

Apple iPad

delays

three modules

friend’s

time

host one or more applications

basic input/output

such a connection

skill

such logical operations

speech

requesting user

haptic interface

response

location information

sake

server

optical disk drive

wired communication paths

portable computing device

CPU or processor

small, handheld computing device

mouse

carrier signals

network interface

read only memory (ROM)

various actions

locate command

particular mobile device

nodes

example

software instructions

general-purpose computing device

cellular towers

messages

microphone

mod2

alternative

notification message

reference

module

sequence

connection

server time stamps

procedures

touch screen

output mechanisms

elements

processing unit

protocols

particular hardware arrangement

other system memory

equivalent

particular remote command type

special purpose buttons

improved hardware

remote management function responsive

wired or wireless interface

random access memories (RAMs)

input interface

drives

very large scale integration

keypad

requested device

internet

desktop computer

software and hardware

command collection topic

special-purpose processor

storage devices

other computer-readable memory locations

flash memory cards

such a request

process signals

such an embodiment

drive interface

appropriate variations

GPS satellites

wide area network

microprocessor

unique command collection topic

term “processor”

laptop

quick access

operations

mod1

execution

responses

blocks

number

instances

mobile device(s) a

location requests

devices and applications

memory

random access memory (RAM)

“requesting device”

message command

computing device

computer server

read only memory (ROM)

steps

touch pad

bi-directional communication

preferably smart phones

restriction

requested devices

touch-sensitive screen

individual functional blocks

EDGE

recent location information

cartridges

remote command

personal computer

multiple cores or processors

ROM

gesture or graphical input

web browser

tape drive

data access

non-transitory computer-readable medium

public network

necessary hardware components

signal

FIGS

high speed memory

location

result topic

other user devices

Apple iPhone.RTM

base station triangulation

combinations

information

remote management framework

various embodiments

basic routine

interface application

close proximity

published response messages

group or cluster

server level

system memory

computing environment

custom VLSI circuitry

basic features

system

location estimation

display

computer

firmware arrangements

post-pc device

input mechanisms

web browser or custom application

remote commands

network connection

digital versatile disks

special purpose processors

recited systems

other modules

notification service

dedicated hardware

order

cache

digital signal processor

bus structures

intermediary

software component

wireless communication paths

cable

performance boost

multiple different types

multiple types

disclosure

hardware module or software module

“requested device”

more than one processor

subscribed command collection topic

assistance

signals

couples

programmable circuits

wireless broadband or WiBB

programming

message

read-only memory (ROM)

components

results

keyboard

functional blocks

servers

energy

memory controller

separate command nodes

interconnected machine modules

subscribed topic

various system components

type or types

storage media

illustrative system embodiment

remote command message

stylus

non-transitory computer-readable storage media

nonvolatile storage

terms

stationary geographic objects

program engines

general purpose processor

certain steps

other data

magnetic cassettes

authorization

multiple processors

WiFi or WLAN

commands

assigned number or address

Global Positioning System

local area network

bit stream

concepts

indication

wireless network router(s)

data

multiple commands

personal computing device

notification commands

software modules

programmable circuit

part

device level

remote management application

trackball

separate command node

certain resources

locate commands

explanation

general use computer

applications

output

greater processing capability

different devices

web-enabled application

workstation

exclude

range

application server

subscribing mobile device

magnetic disk drive

message commands

program modules

addition

transmission

methods

communication

remote management functions

combination

mod3

BIOS

multimodal systems

communications interface

video interface

requests

appropriate command node

hardware capable

recited methods

location interface

features

additional device

authorization steps

two or more user devices

scroll wheel

defined namespace

intranet

user interaction

smart phone

functions

storage device

embodiments

variety

clarity

other types

embedded GPS system

functionality

specific-use programmable circuit

“processor” or processor

device

text-based message

computer implemented steps

data structures

motion

different performance characteristics

user station

circuit

media

computing devices

operating system

command node

corresponding command collection topic

peripherals

computing system

desktop or workstation

particular function

its location

single shared processor

multi-core processor

publisher

runtime

memory bus or memory controller

unique identifier

user

Uniform Resource Identifier

software

illustrative embodiments

output interface

network

function

requesting device

instructions

location command

collection

multiple communication devices

appropriate authentication

communication network

particular functions

recited non-transitory computer-readable storage media

video camera

general purpose DSP circuit

various ways

associated computer

electromagnetic waves

private network

request

modules

operating environment

cached location information

current location

implementations

other hardware

method embodiments

basic components

several types

cell tower

system bus

processor

mobile device(s) b

bus architectures

user input and system output

output device

devices

basic system components

peripheral bus

notification service application

input device

multiple computing devices

local bus

start-up

type

Network Operating Environment


Drawings

Brief Description:

Figure 1 is a block diagram of an exemplary network operating environment for mobile devices.

Detailed Description:

Exemplary Operating Environment 

Figure 1 is a block diagram of an exemplary network operating environment 100 for the mobile devices of Figures 1 through 15. Mobile device(s) a 112 and mobile device(s) b 114 can, for example, communicate over one or more wired and/or wireless network(s) 102 in data communication. For example, a wireless network 110, e.g., a cellular network, can communicate with a wide area network 104 (WAN), such as the internet, by use of a gateway 108. Likewise, an access device 106, such as an 802.11g wireless access point, can provide communication access to the wide area network 104.

In some implementations, both voice and data communications can be established over wireless network 110 and the access device 106. For example, mobile device(s) a 112 can place and receive phone calls (e.g., using voice over Internet Protocol (VoIP) protocols), send and receive e-mail messages (e.g., using Post Office Protocol 3 (POP3)), and retrieve electronic documents and/or streams, such as web pages, photographs, and videos, over wireless network 110, gateway 108, and wide area network 104 (e.g., using Transmission Control Protocol/Internet Protocol (TCP/IP) or User Datagram Protocol (UDP)). Likewise, in some implementations, the mobile device(s) b 114 can place and receive phone calls, send and receive e-mail messages, and retrieve electronic documents over the access device 106 and the wide area network 104. In some implementations, mobile device(s) a 112 or mobile device(s) b 114 can be physically connected to the access device 106 using one or more cables and the access device 106 can be a personal computer. In this configuration, mobile device(s) a 112 or mobile device(s) b 114 can be referred to as a “tethered” device. 

Mobile device(s) a 112 and mobile device(s) b 114 can also establish communications by other means. For example, mobile device(s) a 112 can communicate with other wireless devices, e.g., other mobile devices, cell phones, etc., over the wireless network 110. Likewise, mobile device(s) a 112 and mobile device(s) b 114 can establish peer-to-peer communications 116, e.g., a personal area network, by use of one or more communication subsystems, such as Bluetooth™ communication devices. Other communication protocols and topologies can also be implemented.

The mobile device(s) a 112 or mobile device(s) b 114 can, for example, communicate with one or more services, such as location service(s) 118 and map service 120, over the one or more wired and/or wireless networks. For example, one or more location service(s) 118 can conduct surveys of venues, generate location fingerprint data for each venue, and provide the location fingerprint data to mobile device(s) a 112 or mobile device(s) b 114. Map service 120 can, for example, provide maps of venues, e.g., maps of structures of buildings, to mobile device(s) a 112 or mobile device(s) b 114.

Mobile device(s) a 112 or mobile device(s) b 114 can also access other data and content over the one or more wired and/or wireless networks. For example, content publishers, such as news sites, Really Simple Syndication (RSS) feeds, web sites, blogs, social networking sites, developer networks, etc., can be accessed by mobile device(s) a 112 or mobile device(s) b 114. Such access can be provided by invocation of a web browsing function or application (e.g., a browser) in response to a user touching, for example, a web object.

A number of implementations of the invention have been described. Nevertheless, it will be understood that various modifications can be made without departing from the spirit and scope of the invention.


Parts List

100

network operating environment

102

wired and/or wireless network(s)

104

wide area network

106

access device

108

gateway

110

wireless network

112

mobile device(s) a

114

mobile device(s) b

116

peer-to-peer communications

118

location service(s)

120

map service


Terms/Definitions

web sites

internet

surveys

web object

cellular network

communications

venues

personal computer

voice and data communications

various modifications

topologies

FIGS

Simple Syndication

invention

spirit and scope

provide maps

location service(s)

communication

access device

phone calls

such access

web pages

VoIP

data communication

photographs

blogs

buildings

user touching

social networking sites

wireless network

content publishers

Post Office Protocol

response

configuration

communication access

personal area network

Internet Protocol

block diagram

e-mail messages

Transmission Control Protocol/Internet Protocol

web browsing function or application

wired and/or wireless network(s)

news sites

videos

wide area network (WAN)

location fingerprint data

implementations

venue

peer-to-peer communications

maps

example

mobile device(s) b

User Datagram Protocol

developer networks

structures

number

invocation

mobile device(s) a

wide area network

browser

wireless device

communicate

gateway

voice

electronic documents and/or streams

network operating environment

map service

electronic documents

Network Architecture


Drawings

Brief Description:

Figure 1 illustrates a network architecture in which a group of mobile devices and services communicate over a network.

Detailed Description:

As illustrated in Figure 1, a general network topology implemented in one embodiment of the invention can include a group of “client” or “peer” mobile computing devices A-D (mobile device A 112, mobile device B 114, mobile device C 116, and mobile device D 118, respectively) communicating with one another and with one or more services (CDX service 104, matchmaker service 106, and invitation service 108) over a network 110. Although illustrated as a single network cloud in Figure 1, the network 110 can include a variety of different components, including public networks such as the internet and private networks such as local Wi-Fi networks (e.g., 802.11n home wireless networks or wireless hotspots), local area Ethernet networks, cellular data networks (e.g., 3G, EDGE, etc.), and WiMAX networks, to name a few. For example, mobile device A 112 may be connected to a home Wi-Fi network represented by network link A 120, mobile device B 114 may be connected to a 3G network (e.g., Universal Mobile Telecommunications System (“UMTS”), High-Speed Uplink Packet Access (“HSUPA”), etc.) represented by network link B 122, mobile device C 116 may be connected to a WiMAX network represented by network link C 124, and mobile device D 118 may be connected to a public Wi-Fi network represented by network link D 126. Each of the local network links (network link A 120, network link B 122, network link C 124, and network link D 126) over which the mobile devices (mobile device A 112, mobile device B 114, mobile device C 116, mobile device D 118) are connected may be coupled to a public network such as the internet through a gateway and/or NAT device (not shown in Figure 1), thereby enabling communication between the various mobile devices (mobile device A 112, mobile device B 114, mobile device C 116, mobile device D 118) over the public network. However, if two mobile devices are on the same local or private network (e.g., the same Wi-Fi network), then the two devices may communicate directly over that local/private network, bypassing the public network. It should be noted, of course, that the underlying principles of the invention are not limited to any particular set of network types or network topologies.

Each of the mobile devices (mobile device A 112, mobile device B 114, mobile device C 116, mobile device D 118) illustrated in Figure 1 can communicate with a connection data exchange (CDX service 104), a matchmaker service 106, and an invitation service 108. In one embodiment, the services (CDX service 104, matchmaker service 106, and invitation service 108) can be implemented as software executed across one or more physical computing devices such as servers. As shown in Figure 1, in one embodiment, the services (CDX service 104, matchmaker service 106, and invitation service 108) may be implemented within the context of a larger data service 102 managed by the same entity (e.g., the same data service provider) and accessible by each of the mobile devices (mobile device A 112, mobile device B 114, mobile device C 116, mobile device D 118) over the network 110. The data service 102 can include a local area network (e.g., an Ethernet-based LAN) connecting various types of servers and databases. The data service 102 may also include one or more storage area networks (“SANs”) for storing data. In one embodiment, the databases store and manage data related to each of the mobile devices (mobile device A 112, mobile device B 114, mobile device C 116, mobile device D 118) and the users of those devices (e.g., user account data, device account data, user application data, etc.).

In one embodiment, matchmaker service 106 can match two or more mobile devices for a collaborative P2P session based on a specified set of conditions. For example, users of two or more of the mobile devices may be interested in playing a particular multi-player game. In such a case, the matchmaker service 106 may identify a group of mobile devices to participate in the game based on variables such as each user’s level of expertise, the age of each of the users, the timing of the match requests, the particular game for which a match is requested, and various game-specific variables. By way of example, and not limitation, the matchmaker service 106 may attempt to match users with similar levels of expertise at playing a particular game. Additionally, adults may be matched with other adults, and children may be matched with other children. Moreover, the matchmaker service 106 may prioritize user requests based on the order in which those requests are received. The underlying principles of the invention are not limited to any particular set of matching criteria or any particular type of P2P application.
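
A sketch of these matching variables under assumed scoring rules (the expertise scale, the expertise gap, and the FIFO tie-breaking are illustrative choices, not the disclosed algorithm):

```python
# Hypothetical matchmaking sketch: match on game and age bracket, require
# similar expertise, and prefer the oldest pending request (FIFO priority).
from dataclasses import dataclass, field

@dataclass(order=True)
class MatchRequest:
    received_at: float                      # server receipt time (FIFO priority)
    user_id: str = field(compare=False)
    game: str = field(compare=False)
    expertise: int = field(compare=False)   # assumed 1 (novice) .. 10 (expert)
    is_adult: bool = field(compare=False)

def find_match(pending, request, max_expertise_gap=2):
    """Return the oldest compatible pending request, or None."""
    for candidate in sorted(pending):       # earliest request first
        if (candidate.game == request.game
                and candidate.is_adult == request.is_adult
                and abs(candidate.expertise - request.expertise) <= max_expertise_gap):
            return candidate
    return None
```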

As described in detail below, in response to a match request, the matchmaker service 106 can coordinate with the CDX service 104 to ensure that all matched participants receive the necessary connection data for establishing P2P sessions in an efficient and secure manner.

In one embodiment, the invitation service 108 also identifies mobile devices for participation in collaborative P2P sessions. However, in the case of the invitation service 108, at least one of the participants is specifically identified by another participant. For example, the user of mobile device A 112 may specifically request a collaborative session with the user of mobile device B 114 (e.g., identifying mobile device B 114 with a user ID or phone number). As with the matchmaker service 106, in response to an invitation request, the invitation service 108 can identify the set of participants and coordinate with the CDX service 104 to ensure that all participants receive the necessary connection data for establishing P2P sessions in an efficient and secure manner.

As mentioned above, in one embodiment, the CDX service 104 operates as a central exchange point for connection data required to establish P2P sessions between two or more mobile devices. Specifically, one embodiment of the CDX service generates NAT traversal data (sometimes referred to as “Hole Punch” data) in response to mobile device requests to enable external services and clients to communicate through the NAT of each mobile device (i.e., to “punch a hole” through the NAT to reach the device). For example, in one embodiment, the CDX service detects the external IP address and port needed to communicate with the mobile device and provides this information to the mobile device. In one embodiment, the CDX service also receives and processes lists of mobile devices generated by the matchmaker service 106 and invitation service 108 and efficiently and securely distributes connection data to each of the mobile devices included on the lists (as described in detail below). 
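
The core of this discovery step is that the server reports back the external (post-NAT) address and port it observed for the client, in the same spirit as STUN. A hedged sketch follows; it mirrors the idea only and is not the disclosed CDX protocol (the port and reply format are assumptions):

```python
# Hypothetical sketch of external address/port discovery over UDP.
import socket

def run_discovery_server(bind_addr=("0.0.0.0", 9999)):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(bind_addr)
    while True:                              # serve forever; Ctrl-C to stop
        _, client_addr = sock.recvfrom(1024)
        # client_addr is the address/port as seen *outside* the client's NAT;
        # echoing it back gives the client its own NAT traversal data.
        reply = f"{client_addr[0]}:{client_addr[1]}".encode()
        sock.sendto(reply, client_addr)
```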

In one embodiment, communication between the mobile devices and the CDX service 104 is established using a relatively lightweight network protocol such as User Datagram Protocol (“UDP”) sockets. As is known by those of skill in the art, UDP socket connections do not require hand-shaking dialogues for guaranteeing packet reliability, ordering, or data integrity and, therefore, do not consume as much packet processing overhead as TCP socket connections. Consequently, UDP’s lightweight, stateless nature is useful for servers that answer small queries from a vast number of clients. Moreover, unlike TCP, UDP is compatible with packet broadcasting (in which packets are sent to all devices on a local network) and multicasting (in which packets are sent to a subset of devices on the local network). As described below, even though UDP may be used, security can be maintained on the CDX service 104 by encrypting NAT traversal data using session keys.
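
The client side of such a request/reply exchange illustrates why UDP is lightweight: no connection handshake, one datagram out, one datagram back. A small sketch (the server address is a placeholder, and the payload encryption mentioned above is omitted):

```python
# Sketch of a minimal UDP query: no handshake, so the caller must handle loss.
import socket

def udp_query(server=("cdx.example.com", 9999), payload=b"ping", timeout=2.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)           # UDP gives no delivery guarantee
    try:
        sock.sendto(payload, server)   # one datagram out...
        data, _ = sock.recvfrom(2048)  # ...one datagram back
        return data
    except socket.timeout:
        return None                    # retry or fall back as appropriate
    finally:
        sock.close()
```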

In contrast to the low-overhead, lightweight network protocol used by the CDX service 104, in one embodiment, communication between the mobile devices (mobile device A 112, mobile device B 114, mobile device C 116, mobile device D 118) and the matchmaker service 106 and/or invitation service 108 is established with an inherently secure network protocol such as Hypertext Transfer Protocol Secure (“HTTPS”), which relies on Secure Sockets Layer (“SSL”) or Transport Layer Security (“TLS”) connections. Details associated with these protocols are well known by those of skill in the art. 

Specific examples in which mobile devices establish primary and secondary communication channels will now be described with respect to Figure 2. It should be noted, however, that the underlying principles of the invention are not limited to the particular set of communication links and communication channels shown in Figure 2.

Brief Description:

Figure 2 illustrates a group of mobile devices connected through primary and secondary communication channels.

Detailed Description:

In Figure 2, mobile device A 112 is capable of connecting to a network 110 (e.g., the internet) over communication link B 210 with NAT device B 206 and over communication link A 208 with NAT device A 202. Similarly, mobile device C 116 is capable of connecting to the network 110 over communication link C 214 with NAT device C 212 and over communication link D 216 with NAT device D 204. By way of example, and not limitation, communication link B 210 and communication link C 214 may be 3G communication links, and communication link A 208 and communication link D 216 may be Wi-Fi communication links.

Consequently, in this example, there are four different communication channels which may be established between mobile device A 112 and mobile device C 116: a first channel which uses communication link B 210 and communication link C 214; a second channel which uses communication link B 210 and communication link D 216; a third channel which uses communication link A 208 and communication link C 214; and a fourth channel which uses communication link A 208 and communication link D 216. In one embodiment, mobile devices A and C will select one of these channels as the primary communication channel based on a prioritization scheme and will select the three remaining channels as backup communication channels. For example, one prioritization scheme may be to select the channel with the highest bandwidth as the primary channel and to use the remaining channels as the secondary channels. If two or more channels have comparable bandwidth, the prioritization scheme may include selecting the least expensive channel (assuming that the user pays a fee to use one or more of the channels). Alternatively, the prioritization scheme may be to select the least expensive channel as the primary channel and, if the cost of each channel is the same, to select the highest bandwidth channel. Various different prioritization schemes may be implemented while still complying with the underlying principles of the invention.
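
A sketch of the bandwidth-first variant of this prioritization scheme, with assumed channel fields and an assumed margin for what counts as "comparable" bandwidth:

```python
# Hypothetical sketch: pick the primary channel by bandwidth, breaking
# near-ties by cost; the rest become secondary (backup) channels.
from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    bandwidth_mbps: float
    cost_per_mb: float      # 0.0 for channels the user does not pay to use

def select_channels(channels, comparable_margin_mbps=1.0):
    """Return (primary, secondaries) under the bandwidth-first policy."""
    ranked = sorted(channels, key=lambda c: (-c.bandwidth_mbps, c.cost_per_mb))
    best = ranked[0]
    # Among channels with comparable bandwidth, prefer the least expensive.
    comparable = [c for c in ranked
                  if best.bandwidth_mbps - c.bandwidth_mbps <= comparable_margin_mbps]
    primary = min(comparable, key=lambda c: c.cost_per_mb)
    secondaries = [c for c in channels if c is not primary]
    return primary, secondaries
```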

Mobile device A 112 and mobile device C 116 may utilize the techniques described above to establish the primary communication channel (e.g., by exchanging connection data via the CDX service 104). Alternatively, mobile device A 112 and mobile device C 116 may implement standard Interactive Connectivity Establishment (“ICE”) transactions to exchange the connection data. Regardless of how the primary channel is established, once it is, mobile device A 112 and mobile device C 116 may exchange connection data for the secondary communication channels over the primary communication channel. For example, if the primary communication channel in Figure 2 includes communication link A 208 and communication link C 214, then this connection, once established, may be used to exchange connection data for secondary communication channels which include communication link B 210 and communication link C 214. In this example, the connection data exchanged over the primary communication channel may include NAT traversal data and NAT type data for NAT device B 206 and NAT device C 212, including public and private IP addresses/ports for each of the mobile devices.

Once the secondary communication channels have been established, they may be maintained open using heartbeat packets. For example, mobile device A 112 may periodically transmit a small “heartbeat” packet to mobile device C 116, and/or mobile device C 116 may periodically transmit a small “heartbeat” packet to mobile device A 112, to ensure that the NAT ports used for the secondary channels remain open (NATs will often close ports due to inactivity). The heartbeat packets may be UDP packets with no payload, although the underlying principles of the invention are not limited to any particular packet format. Alternatively, the heartbeat packets may be UDP packets with a self-identifying type field in their payload header, and may contain optional additionally-formatted information, including but not limited to a channel time-to-live value.
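
A sketch of the keep-alive behavior just described; the one-byte type field, the time-to-live encoding, and the 15-second interval are assumptions (typical NAT UDP timeouts are longer, so any comfortably shorter interval would do):

```python
# Hypothetical heartbeat sketch: a small UDP packet sent periodically so the
# NAT port mappings for a secondary channel stay open.
import socket
import struct
import time

HEARTBEAT_TYPE = 0x01        # assumed self-identifying type field
HEARTBEAT_INTERVAL_S = 15    # assumed interval, well under NAT UDP timeouts

def send_heartbeats(peer, channel_ttl_s=60, count=3):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Type field plus an optional channel time-to-live value (big-endian).
    packet = struct.pack("!BI", HEARTBEAT_TYPE, channel_ttl_s)
    try:
        for _ in range(count):
            sock.sendto(packet, peer)
            time.sleep(HEARTBEAT_INTERVAL_S)
    finally:
        sock.close()

# Hypothetical usage: send_heartbeats(("203.0.113.7", 40000))
```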

Brief Description:

Figure 3 illustrates a group of mobile devices connected through primary and secondary communication channels.

Detailed Description:

Figure 3 illustrates the same network configuration as shown in Figure 2 with the addition of mobile device B 114 connected directly to the network 110 and connected to mobile device C 116 through a private network 302 connection. The private network 302 may be, for example, a Bluetooth PAN connection between mobile device B 114 and mobile device C 116. It can be seen from this example that switching from a primary channel to a secondary channel may dramatically alter the network topology.

Brief Description:

Figure 4 illustrates the resulting network topologies from Figure 3.

Detailed Description:

For example, as shown in Figure 4, if the primary channel A 402 for the mobile devices includes communication link C 214 (resulting in direct connections between devices A, B, and C) and the secondary channels include the private network 302, then the network topology may change as illustrated in Figure 4 because the only way for device A and device C to communicate using the private network is through device B. While this is a simplified example with only three devices, a significantly larger number of devices may be used, resulting in a variety of different network topology configurations when switching between primary and secondary communication channels.

Brief Description:

Figure 5 illustrates a network architecture in which a group of mobile devices and services, including a registration/directory service 502 and a push notification service 504, communicate over a network 110.

Detailed Description:

As illustrated in Figure 5, in addition to the CDX service 104, matchmaker service 106, and invitation service 108 (some embodiments of which are described above), one embodiment of the invention can include a registration/directory service 502, a push notification service 504, and a relay service 506. As mentioned above, in one embodiment, the invitation service 108 and/or the matchmaker service 106 can use the registration/directory service 502 to identify registered mobile devices and the push notification service 504 to push data to the mobile devices. In one embodiment, when a mobile device is activated on the network, it registers a “push token” (sometimes referred to as a “notification service account identifier” in the Push Notification Application) with a database maintained by the registration/directory service 502 by associating the push token with a password protected user ID or a telephone number. If the push token is identified in the registration directory (e.g., by performing a query with the user ID), the push notification service 504 can use the push token to transmit push notifications to a mobile device. In one embodiment, the push notification service is the Apple Push Notification Service (“APNS”) designed by the assignee of the present application and described, for example, in the Push Notification Application referenced above.
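
A minimal sketch of this register-then-lookup flow, with hypothetical names (the directory storage, the push_service interface, and the in-memory map are assumptions, not the disclosed implementation):

```python
# Hypothetical sketch: a push token is stored against a user ID or phone
# number at registration time; a later query by identifier yields the token
# used to address a push notification.
class RegistrationDirectory:
    def __init__(self):
        self._tokens = {}    # user ID or phone number -> push token

    def register(self, identifier, push_token):
        self._tokens[identifier] = push_token

    def lookup(self, identifier):
        return self._tokens.get(identifier)   # None if not registered

def push_to_user(directory, push_service, user_id, payload):
    token = directory.lookup(user_id)
    if token is not None:
        push_service.send(token, payload)   # e.g., hand off to a push service
```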


Parts List

102

data service

104

CDX service

106

matchmaker service

108

invitation service

110

network

112

mobile device A

114

mobile device B

116

mobile device C

118

mobile device D

120

mobile devices

122

124

126

202

NAT device A

204

NAT device D

206

NAT device B

208

210

212

NAT device C

214

communication link C

216

302

private network

304

communication link E

306

communication link F

402

primary channel A

404

primary channel B

502

registration/directory service

504

push notification service

506

relay service


Terms/Definitions

device

other children

same entity

first channel

remaining channels

mobile device B

various different prioritization schemes

subset

particular device configuration

User Datagram Protocol

primary and secondary communication channels

addition

direct connections

hand-shaking dialogues

expertise

primary channel A

skill

same data service provider

Transport Layer Security

servers and databases

users

match request

network topology

case

information

wireless hotspots

standard Internet Connectivity Establishment

P2P sessions

Secure Sockets Layer

links

other adults and children

High-Speed Uplink Packet Access

private network connection

network types or network topologies

NAT traversal data

channels

conditions

invention

secondary channels

CDX service

underlying principles

communication link A

two or more channels

network link C

software

primary communication channel

mobile device A

variety

communication link C

“SANs”

backup communication channels

Ethernet-based LAN

communication channels

secondary communication channels

databases store

vast number

devices

highest bandwidth channel

participation

public network

home Wi-Fi network

phone number

user application data

P2P application

service

connection data

order

switching

security

relay service

stateless nature

game

protocols

third channel

their payload header

various mobile devices 120

general network topology

cellular data networks

802.11n home

such a case

internet

NAT device

local network

requests

network link D

matching criteria

small queries

second channel

WiMAX networks

only three devices

local area Ethernet networks

UDP socket connections

communication link B

3G communication links

device B

payload

larger data service

mobile device C

link management module

mobile device requests

optional additionally-formatted information

lists

networks

servers

Hypertext Transfer Protocol Secure

NATs

same Wi-Fi network

public Wi-Fi network

low-overhead

NAT device B

particular game

detail

NAT ports

external IP address

channel time-to-live value

NAT device D

connection data exchange

participants

particular multi-player game

mobile devices and services

example

data integrity

various game-specific variables

various types

two or more mobile devices

specified set

clients

private network

mobile device D

response

particular packet format

limitation

small “heartbeat” packet

Universal Mobile Telecommunications System

four different communication channels

edge

collaborative P2P sessions

Bluetooth PAN connection

techniques

two devices

local/private network

timing

“UMTS”

simplified example

user requests

collaborative P2P session

packet broadcasting

network architecture

data service

inherently secure network protocol

NAT device A

similar levels

registration/directory service

external services

3G network

different network topology configurations

NAT type data

heartbeat packets

communication link D

primary

method

network link B

highest bandwidth

lightweight network protocol

invitation request

context

match

local area network

channel

communication link F

specific examples

details

primary channel B

public networks

contrast

single network cloud

group

secure manner

network

socket connections

packets

open using heartbeat packets

UDP packets

different components

bandwidth

course

two mobile devices

efficient

Wi-Fi communication links

user

user ID

matchmaker service

network link A

device C

resulting network topologies

port

user account data

prioritization scheme

relatively lightweight network protocol

collaborative session

push notification service

lightweight

central exchange point

only way

public and private IP addresses/ports

significantly larger number

local Wi-Fi networks

variables

private networks

particular set

WiMAX network

match requests

user’s

ports

invitation service

packet reliability

participant

NAT device C

three remaining channels

level

data

gateway

device account data

self-identifying type field

same network configuration

adults

communication link E

connection

session keys

least expensive channel

matched participants

secondary channel

Augmented Reality


Drawings

Brief Description:

Figure 1 illustrates an embodiment of a superimposing logic 102.

Detailed Description:

Figure 1 illustrates an embodiment of an augmented reality environment 100. A user 110 wearing headset 114 interacts with physical objects virtualized in the augmented reality environment 100. In this example the user 110 interacts with either a purely virtual document, or a physical document that is virtualized as a virtual document 112 on a virtual surface 104 in the augmented reality environment 100. In this embodiment, an imaging sensor 108 is directed toward a physical surface 106, and superimposing logic 102 receives a sensor output 116 (e.g., image or video) from the imaging sensor 108.  Superimposing logic 102 transforms the sensor output 116 into a virtual document 112 superimposed on a virtual surface 104 representing the physical surface 106 in the augmented reality environment 100.
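As a purely illustrative sketch, the transformation performed by the superimposing logic 102 can be summarized as a function from sensor output to a virtual document placed on a virtual surface. The Python types and names below are assumptions, not part of the disclosed embodiment.

```python
# Illustrative sketch of the superimposing pipeline of Figure 1; all names
# and types are assumptions standing in for the disclosed components.
from dataclasses import dataclass

@dataclass
class SensorOutput:
    frame: bytes  # image or video frame captured by imaging sensor 108

@dataclass
class VirtualDocument:
    texture: bytes   # virtualized content of the (possibly physical) document
    surface_id: str  # virtual surface on which the document is superimposed

def superimpose(output: SensorOutput, surface_id: str) -> VirtualDocument:
    # Transform the sensor output into a virtual document superimposed on
    # the virtual surface representing the physical surface.
    return VirtualDocument(texture=output.frame, surface_id=surface_id)

doc = superimpose(SensorOutput(frame=b"..."), surface_id="virtual-surface-104")
```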

In other embodiments there may be no physical surface 106 and no physical document on the physical surface 106, in which case the environment would be a purely virtual reality (VR) environment, not an augmented reality environment 100. Thus there are many possibilities for the environment: it may be purely virtual; a physical surface 106 may be virtualized and augmented with a virtual document; or both the physical surface 106 and a physical document may be virtualized.

Brief Description:

Figure 2 illustrates an AR or VR system 200 in accordance with one embodiment.

Detailed Description:

Figure 2 illustrates an AR or VR system 200 in accordance with one embodiment. A virtual environment 202 receives input from the user 214 and in response sends an interaction signal to a virtual object 206, a virtual surface 210, or an application 212. The virtual object 206, virtual surface 210, or application 212 sends an action to an operating system 204, and in response the operating system 204 operates the hardware 208 to implement the action in the augmented or virtual environment.
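The signal flow of Figure 2 can be sketched as a chain of handlers. The following Python fragment is a conceptual illustration only; the class and method names are assumptions and not part of the disclosed system.

```python
# Sketch of the Figure 2 signal flow: user input -> virtual environment ->
# virtual object -> operating system -> hardware. Names are illustrative.
class Hardware:
    def perform(self, action: str) -> None:
        print(f"hardware executes: {action}")

class OperatingSystem:
    def __init__(self, hardware: Hardware) -> None:
        self.hardware = hardware

    def handle(self, action: str) -> None:
        # The OS operates the hardware to realize the action in the environment.
        self.hardware.perform(action)

class VirtualObject:
    def __init__(self, os: OperatingSystem) -> None:
        self.os = os

    def interact(self, signal: str) -> None:
        # Respond to the interaction signal by emitting an action to the OS.
        self.os.handle(f"render response to {signal!r}")

class VirtualEnvironment:
    def __init__(self, target: VirtualObject) -> None:
        self.target = target

    def on_user_input(self, user_input: str) -> None:
        self.target.interact(user_input)

env = VirtualEnvironment(VirtualObject(OperatingSystem(Hardware())))
env.on_user_input("grab virtual document")
```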

Brief Description:

Figure 3 illustrates a device 300 in accordance with one embodiment.

Detailed Description:

Figure 3 illustrates a perspective view of a wearable augmented reality (“AR”) device (device 300), from the perspective of a wearer of the device 300 (“AR user”). The device 300 is a computer device in the form of a wearable headset.

The device 300 comprises a headpiece 302, which is a headband, arranged to be worn on the wearer’s head. The headpiece 302 has a central portion 304 intended to fit over the nose bridge of a wearer, and has an inner curvature intended to wrap around the wearer’s head above their ears.

The headpiece 302 supports a left optical component 306 and a right optical component 308, which are waveguides. For ease of reference herein an optical component will be considered to be either a left or right component, because in the described embodiment the components are essentially identical apart from being mirror images of each other. Therefore, all description pertaining to the left-hand component also pertains to the right-hand component. The device 300 comprises augmented reality device logic 400 that is depicted in Figure 4.

The augmented reality device logic 400 comprises a graphics engine 402, which may comprise a micro display and imaging optics in the form of a collimating lens (not shown). The micro display can be any type of image source, such as a liquid crystal on silicon (LCOS) display, a transmissive liquid crystal display (LCD), a matrix array of LEDs (whether organic or inorganic), or any other suitable display. The display is driven by circuitry known in the art to activate individual pixels of the display to generate an image. Substantially collimated light, from each pixel, falls on an exit pupil of the graphics engine 402. At the exit pupil, the collimated light beams are coupled into the left optical component 306 and the right optical component 308 at a respective left in-coupling zone 310 and right in-coupling zone 312. In-coupled light is then guided, through a mechanism that involves diffraction and TIR, laterally of the optical component in a respective left intermediate zone 314 and right intermediate zone 316, and also downward into a respective left exit zone 318 and right exit zone 320 where it exits towards the user’s eye.

The collimating lens collimates the image into a plurality of beams, which form a virtual version of the displayed image, the virtual version being a virtual image at infinity in the optics sense. The light exits as a plurality of beams, corresponding to the input beams and forming substantially the same virtual image, which the lens of the eye projects onto the retina to form a real image visible to the user. In this manner, the left optical component 306 and the right optical component 308 project the displayed image onto the wearer’s eyes. 

The various optical zones can, for example, be suitably arranged diffraction gratings or holograms. Each optical component has a refractive index n which is such that total internal reflection takes place to guide the beam from the light engine along the respective intermediate expansion zone, and down towards the respective exit zone.

Each optical component is substantially transparent, whereby the wearer can see through it to view a real-world environment in which they are located simultaneously with the projected image, thereby providing an augmented reality experience.

To provide a stereoscopic image, i.e. one that is perceived as having 3D structure by the user, slightly different versions of a 2D image can be projected onto each eye, for example from multiple graphics engines 402 (i.e., two micro displays), or from the same light engine (i.e., one micro display) using suitable optics to split the light output from the single display.

The device 300 is just one exemplary configuration. For instance, where two light engines are used, these may instead be at separate locations to the right and left of the device (near the wearer’s ears). Moreover, whilst in this example the input beams that form the virtual image are generated by collimating light from the display, an alternative light engine based on so-called scanning can replicate this effect with a single beam, the orientation of which is rapidly modulated whilst simultaneously modulating its intensity and/or colour. A virtual image can be simulated in this manner that is equivalent to a virtual image that would be created by collimating light of a (real) image on a display with collimating optics. Alternatively, a similar AR experience can be provided by embedding substantially transparent pixels in a glass or polymer plate in front of the wearer’s eyes, having a similar configuration to the left optical component 306 and right optical component 308, though without the need for the zone structures.

Other headpiece 302 embodiments are also within the scope of the subject matter. For instance, the display optics can equally be attached to the user’s head using a frame (in the manner of conventional spectacles), a helmet, or another fit system. The purpose of the fit system is to support the display and provide stability to the display and other head-borne systems such as tracking systems and cameras. The fit system can be designed to fit the anthropometric range and head morphology of the user population and provide comfortable support of the display system.

The device 300 also comprises one or more cameras 404, for example a left stereo camera 322 and a right stereo camera 324 mounted on the headpiece 302 and configured to capture an approximate view (“field of view”) from the user’s left and right eyes respectively in this example. The cameras are located towards either side of the user’s head on the headpiece 302, and thus capture images of the scene forward of the device from slightly different perspectives. In combination, the stereo cameras capture a stereoscopic moving image of the real-world environment as the device moves through it. A stereoscopic moving image means two moving images showing slightly different perspectives of the same scene, each formed of a temporal sequence of frames to be played out in quick succession to replicate movement. When combined, the two images give the impression of moving 3D structure.

A left microphone 326 and a right microphone 328 are located at the front of the headpiece (from the perspective of the wearer), and left and right channel speakers, earpiece or other audio output transducers are to the left and right of the headpiece 302. These are in the form of a pair of bone conduction audio transducers functioning as a left speaker 330 and right speaker 332 audio channel output.

Brief Description:

Figure 4 illustrates augmented reality device logic 400 in accordance with one embodiment.

Detailed Description:

Figure 4 illustrates components of an exemplary augmented reality device logic 400. The augmented reality device logic 400 comprises a graphics engine 402, a camera 404, processing units 406 (including one or more CPU 408 and/or GPU 410), a WiFi 412 wireless interface, a Bluetooth 414 wireless interface, speakers 416, microphones 418, and one or more memory 420.

The processing units 406 may in some cases comprise programmable devices such as bespoke processing units optimized for a particular function, such as AR related functions. The augmented reality device logic 400 may comprise other components that are not shown, such as dedicated depth sensors, additional interfaces etc.

 

Some or all of the components in Figure 4 may be housed in an AR headset. In some embodiments, some of these components may be housed in a separate housing connected to, or in wireless communication with, the components of the AR headset. For example, a separate housing for some components may be designed to be worn on a belt or to fit in the wearer’s pocket, or one or more of the components may be housed in a separate computer device (smartphone, tablet, laptop or desktop computer etc.) which communicates wirelessly with the display and camera apparatus in the AR headset, whereby the headset and separate device constitute the full augmented reality device logic 400.

The memory 420 comprises logic 422 to be executed by the processing units 406. In some cases, different parts of the logic 422 may be executed by different components of the processing units 406. The logic 422 typically comprises code of an operating system, as well as code of one or more applications configured to run on the operating system to carry out aspects of the processes disclosed herein.

Brief Description:

Figure 5 illustrates an AR device 500 that may implement aspects of the machine processes described herein.

Detailed Description:

Figure 5 illustrates more aspects of an AR device 500 according to one embodiment.  The AR device 500 comprises processing units 502, input devices 504, memory 506,  output devices 508, storage devices 510, a network interface 512, and various logic to carry out the processes disclosed herein.

The input devices 504 comprise transducers that convert physical phenomena into machine-internal signals, typically electrical, optical, or magnetic signals. Signals may also be wireless in the form of electromagnetic radiation in the radio frequency (RF) range, but also potentially in the infrared or optical range. Examples of input devices 504 are keyboards, which respond to touch or physical pressure from an object or the proximity of an object to a surface; mice, which respond to motion through space or across a plane; microphones, which convert vibrations in the medium (typically air) into device signals; and scanners, which convert optical patterns on two- or three-dimensional objects into device signals. The signals from the input devices 504 are provided via various machine signal conductors (e.g., busses or network interfaces) and circuits to the memory 506.

The memory 506 provides for storage (via configuration of matter or states of matter) of signals received from the input devices 504, instructions and information for controlling operation of the processing units 502, and signals from storage devices 510. The memory 506 may in fact comprise multiple memory devices of different types, for example random access memory devices and non-volatile (e.g., FLASH memory) devices.

Information stored in the memory 506 is typically directly accessible to the processing units 502 of the device. Signals input to the AR device 500 cause the reconfiguration of the internal material/energy state of the memory 506, creating logic that in essence forms a new machine configuration, influencing the behavior of the AR device 500 by affecting the behavior of the processing units 502 with control signals (instructions) and data provided in conjunction with the control signals. 

The storage devices 510 may provide a slower but higher capacity machine memory capability. Examples of storage devices 510 are hard disks, optical disks, large capacity flash memories or other non-volatile memory technologies, and magnetic memories. 

The processing units 502 may cause the configuration of the memory 506 to be altered by signals in the storage devices 510. In other words, the processing units 502 may cause data and instructions to be read from the storage devices 510 into the memory 506, from which they may then influence the operations of the processing units 502 as instructions and data signals, and from which they may also be provided to the output devices 508. The processing units 502 may alter the content of the memory 506 by signaling to a machine interface of the memory 506 to alter its internal configuration, and then convert signals to the storage devices 510 to alter their material internal configuration. In other words, data and instructions may be backed up from the memory 506, which is often volatile, to the storage devices 510, which are often non-volatile.

Output devices 508 are transducers which convert signals received from the memory 506 into physical phenomena such as vibrations in the air, patterns of light on a machine display, mechanical vibrations (i.e., haptic devices), or patterns of ink or other materials (i.e., printers and 3-D printers).

The network interface 512 receives signals from the memory 506 or processing units 502  and converts them into electrical, optical, or wireless signals to other machines, typically via a machine network. The network interface 512 also receives signals from the machine network and converts them into electrical, optical, or wireless signals to the memory 506 or processing units 502.

Brief Description:

Figure 6 illustrates AR device logic 600 in accordance with one embodiment.

Detailed Description:

Figure 6 illustrates a functional block diagram of an embodiment of AR device logic 600. The AR device logic 600 comprises the following functional modules: a rendering engine 616, local augmentation logic 614, local modeling logic 608, device tracking logic 606, an encoder 612, and a decoder 620. Each of these functional modules may be implemented in software, dedicated hardware, firmware, or a combination of these logic types.

The rendering engine 616 controls the graphics engine 618 to generate a stereoscopic image visible to the wearer, i.e. to generate slightly different images that are projected onto different eyes by the optical components of a headset substantially simultaneously, so as to create the impression of 3D structure.

The stereoscopic image is formed by rendering engine 616 rendering at least one virtual display element (“augmentation”), which is perceived as a 3D element, i.e. having perceived 3D structure, at a real-world location in 3D space by the user.

An augmentation is defined by an augmentation object stored in the memory 602. The augmentation object comprises: location data defining a desired location in 3D space for the virtual element (e.g. as (x,y,z) Cartesian coordinates); structural data defining 3D surface structure of the virtual element, i.e. a 3D model of the virtual element; and image data defining 2D surface texture of the virtual element to be applied to the surfaces defined by the 3D model. The augmentation object may comprise additional information, such as a desired orientation of the augmentation.
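As a non-limiting illustration, the augmentation object described above might be represented as follows. The field names and the triangle-mesh encoding are assumptions made for the sketch; the description requires only location data, structural data, image data, and optionally an orientation.

```python
# Minimal sketch of an augmentation object; field names are assumptions, but
# the contents (location, 3D structure, surface texture, optional orientation)
# follow the description above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AugmentationObject:
    location: tuple[float, float, float]        # desired (x, y, z) position in 3D space
    vertices: list[tuple[float, float, float]]  # vertex positions of the 3D model
    mesh: list[tuple[int, int, int]]            # structural data: triangle indices into vertices
    texture: bytes                              # image data: 2D surface texture for the model
    orientation: Optional[tuple[float, float, float]] = None  # optional desired orientation

ghost = AugmentationObject(
    location=(0.0, 0.8, 1.5),
    vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    mesh=[(0, 1, 2)],
    texture=b"...",
)
```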

The perceived 3D effects are achieved through suitable rendering of the augmentation object. To give the impression of the augmentation having 3D structure, a stereoscopic image is generated based on the 2D surface and 3D augmentation model data in the data object, with the augmentation being rendered to appear at the desired location in the stereoscopic image.

A 3D model of a physical object is used to give the impression of the real world having expected tangible effects on the augmentation, in the way that it would on a real-world object. The 3D model represents structure present in the real world, and the information it provides about this structure allows an augmentation to be displayed as though it were a real-world 3D object, thereby providing an immersive augmented reality experience. The 3D model is in the form of a 3D mesh.

For example, based on the model of the real-world, an impression can be given of the augmentation being obscured by a real-world object that is in front of its perceived location from the perspective of the user; dynamically interacting with a real-world object, e.g. by moving around the object; statically interacting with a real-world object, say by sitting on top of it etc.

Whether or not real-world structure should affect an augmentation can be determined based on suitable rendering criteria. For example, a 3D model of the perceived AR world, including the real-world surface structure and any augmentations, can be created and projected onto a plane along the AR user’s line of sight as determined using pose tracking (see below). A suitable criterion for determining whether a real-world object should be perceived as partially obscuring an augmentation is then whether the projection of the real-world object in the plane overlaps with the projection of the augmentation; this could be further refined to account for transparent or opaque real-world structures. Generally the criteria can depend on the location and/or orientation of the augmented reality device and/or the real-world structure in question.
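The following Python sketch illustrates that criterion under simplifying assumptions: the projections onto the plane along the user’s line of sight are reduced to axis-aligned rectangles with a single depth value each, whereas a real renderer would project the actual mesh geometry.

```python
# Sketch of the occlusion criterion: a real-world object partially obscures
# an augmentation when their projections overlap and the real object lies
# nearer to the viewer. The rectangle-plus-depth model is an assumption.
from dataclasses import dataclass

@dataclass
class Projection:
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    depth: float  # distance from the viewer along the line of sight

def overlaps(a: Projection, b: Projection) -> bool:
    return (a.x_min < b.x_max and b.x_min < a.x_max and
            a.y_min < b.y_max and b.y_min < a.y_max)

def obscures(real_object: Projection, augmentation: Projection) -> bool:
    return overlaps(real_object, augmentation) and real_object.depth < augmentation.depth

table = Projection(0.0, 1.0, 0.0, 1.0, depth=1.0)
ghost = Projection(0.5, 1.5, 0.5, 1.5, depth=2.0)
print(obscures(table, ghost))  # True: the table is in front of the augmentation
```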

An augmentation can also be mapped to the mesh, in the sense that its desired location and/or orientation is defined relative to a certain structure(s) in the mesh. Should that structure move and/or rotate, causing a corresponding change in the mesh, proper rendering will cause a corresponding change in the location and/or orientation of the augmentation. For example, the desired location of an augmentation may be on, and defined relative to, a table-top structure; should the table be moved, the augmentation moves with it, as in the sketch below. Object recognition can be used to this end, for example to recognize a known shape of table and thereby detect when the table has moved using its recognizable structure. Such object recognition techniques are known in the art.
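The following sketch illustrates such mesh-relative anchoring. The classes are illustrative assumptions; the point is only that the annotation’s world location is derived from the anchoring structure, so moving the structure moves the annotation.

```python
# Sketch of an augmentation whose location is defined relative to a mesh
# structure (here, a table top), so that moving the structure moves the
# augmentation with it. Names are illustrative.
from dataclasses import dataclass

@dataclass
class MeshStructure:
    origin: tuple[float, float, float]  # position of the recognized structure

@dataclass
class Annotation:
    anchor: MeshStructure
    offset: tuple[float, float, float]  # desired location relative to the anchor

    def world_location(self) -> tuple[float, float, float]:
        ox, oy, oz = self.anchor.origin
        dx, dy, dz = self.offset
        return (ox + dx, oy + dy, oz + dz)

table_top = MeshStructure(origin=(2.0, 0.7, 1.0))
note = Annotation(anchor=table_top, offset=(0.0, 0.05, 0.0))
print(note.world_location())        # (2.0, 0.75, 1.0)

table_top.origin = (3.0, 0.7, 1.0)  # the table moves...
print(note.world_location())        # (3.0, 0.75, 1.0) ...and the annotation follows
```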

An augmentation that is mapped to the mesh in this manner, or is otherwise associated with a particular piece of surface structure embodied in a 3D model, is referred to as an “annotation” to that piece of surface structure. In order to annotate a piece of real-world surface structure, it is necessary to have that surface structure represented by the 3D model in question; without this, the real-world structure cannot be annotated.

The local modeling logic 608 generates a local 3D model “LM” of the environment in the memory 602, using the AR device’s own sensor(s) e.g. cameras 610 and/or any dedicated depth sensors etc. The local modeling logic 608 and sensor(s) constitute sensing apparatus.

The device tracking logic 606 tracks the location and orientation of the AR device, e.g. a headset, using local sensor readings captured from the AR device. The sensor readings can be captured in a number of ways, for example using the cameras 610  and/or other sensor(s) such as accelerometers. The device tracking logic 606 determines the current location and orientation of the AR device and provides this information to the rendering engine 616, for example by outputting a current “pose vector” of the AR device. The pose vector is a six dimensional vector, for example (x, y, z, P, R, Y) where (x,y,z) are the device’s Cartesian coordinates with respect to a suitable origin, and (P, R, Y) are the device’s pitch, roll and yaw with respect to suitable reference axes.
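As a minimal illustration, the pose vector could be carried in a container such as the following. The class and the stubbed sensor fusion are assumptions; only the six components (x, y, z, P, R, Y) come from the description above.

```python
# Sketch of the six-dimensional pose vector output by the device tracking
# logic: Cartesian position plus pitch, roll, and yaw.
from dataclasses import dataclass

@dataclass
class PoseVector:
    x: float      # Cartesian coordinates with respect to a chosen origin
    y: float
    z: float
    pitch: float  # rotations (in radians here) about the reference axes
    roll: float
    yaw: float

def current_pose() -> PoseVector:
    # A real device would fuse camera and accelerometer readings here;
    # a fixed value stands in for the sensor pipeline in this sketch.
    return PoseVector(x=0.0, y=1.6, z=0.0, pitch=0.0, roll=0.0, yaw=1.57)

pose = current_pose()  # handed to the rendering engine each frame
```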

The rendering engine 616 adapts the local model based on the tracking, to account for the movement of the device, i.e. to maintain the perception of the augmentations as 3D elements occupying the real world, for example to ensure that static augmentations appear to remain static (which will in fact be achieved by scaling or rotating them as, from the AR user’s perspective, the environment is moving relative to them).

The encoder 612 receives image data from the cameras 610 and audio data from the microphones 604, and possibly other types of data (e.g., annotations or text generated by the user of the AR device using the local augmentation logic 614), and transmits that information to other devices, for example the devices of collaborators in the AR environment. The decoder 620 receives an incoming data stream from other devices, and extracts audio, video, and possibly other types of data (e.g., annotations, text) therefrom.


Parts List

100

augmented reality environment

102

superimposing logic

104

virtual surface

106

physical surface

108

imaging sensor

110

user

112

virtual document

114

headset

116

sensor output

200

AR or VR system

202

virtual environment

204

operating system

206

virtual object

208

hardware

210

virtual surface

212

application

214

user

300

device

302

headpiece

304

central portion

306

left optical component

308

right optical component

310

left in-coupling zone

312

right in-coupling zone

314

left intermediate zone

316

right intermediate zone

318

left exit zone

320

right exit zone

322

left stereo camera

324

right stereo camera

326

left microphone

328

right microphone

330

left speaker

332

right speaker

400

augmented reality device logic

402

graphics engine

404

camera

406

processing units

408

CPU

410

GPU

412

WiFi

414

Bluetooth

416

speakers

418

microphones

420

memory

422

logic

500

AR device

502

processing units

504

input devices

506

memory

508

output devices

510

storage devices

512

network interface

514

logic

516

logic

518

logic

520

logic

600

AR device logic

602

memory

604

microphones

606

device tracking logic

608

local modeling logic

610

cameras

612

encoder

614

local augmentation logic

616

rendering engine

618

graphics engine

620

decoder

622

speakers


Terms/Definitions

virtual surface

projection location

camera

imaging sensor

texture image

logic

filtered texture

virtual environment

interaction

system

virtual reality

the computer-generated simulation of a three-dimensional environment that can be interacted with in a seemingly real or physical way by a person using special electronic equipment, such as a headset with a display and gloves fitted with sensors.

augmented reality

technology that superimposes computer-generated imagery on a user’s view of the real world, thus providing a composite view.

virtualize

converting a physical thing to a computer-generated simulation of that thing.