Organization: Public

Cloud Storage Scanning Implementation


Drawings

Brief Description:

Figure 1 illustrates an example system 100 in accordance with one embodiment.

Detailed Description:

Referring now to Figure 1, an example system 100 includes a cloud infrastructure 102 that includes a scanning service 104 and virtual machines running in one or more virtual private clouds 114. Applications instantiated on virtual machines within the virtual private cloud 114 may access one or more cloud data stores 106 for storage of data. Administrators may configure the virtual private cloud 114 in zones, and may architect applications to store data in and receive data from the cloud data store 106 so as to provide fault tolerance and availability.

While the cloud infrastructure 102, scanning service 104, and virtual machine instances in the virtual private cloud 114 may be described with respect to cloud-based infrastructure generally, and with respect to Amazon Web Services (AWS) and AWS S3 buckets as an example implementation, it should be understood that the architecture and concept may be used with any suitable cloud service and related storage system. For example, cloud infrastructure services available from Microsoft Azure, CenturyLink Cloud, VMware, Rackspace, Joyent, and Google may be suitable in various implementations, as well as other cloud infrastructure or infrastructure-as-a-service providers, with adjustments or modifications as may be needed for a particular implementation.

Deployment of the scanning service 104 may be accomplished with a workflow that is intended to be relatively simple for an administrator to initiate and manage, and that relieves the requirement to deploy and manage an agent on application templates or instances as they are created. This may enable, for example, in some implementations, a usage-based billing model as compared to a per-seat license for each image created, which may be desirable with cloud billing models, and particularly in an auto-scaling environment. As instances are created and shut down, instantiations of the scanning service 104 may be based on the load on the scanning service 104, and may be managed, for example, by the security manager 116, rather than by the administrator of the applications running in the virtual private cloud 114.

In some implementations, installation and registration may involve setting permissions and authentication configuration, so that a cloud scanning provider handles administration of the scanning application and datasets without additional impact to customers’ workflows. This reduces complexity for the application administrator when adding data protection capability to applications.

The system 100 includes a cloud infrastructure service 102 that provides computing resources for execution of software applications, data storage, and resource management, and may provide other services as well. In an example implementation, the cloud infrastructure 102 is implemented with the AWS service, although as mentioned above other suitable cloud infrastructure services may be used. 

The cloud infrastructure service 102 may include a cloud data store 106. The cloud data store 106 may be used by applications within the cloud infrastructure 102 to store data. The cloud data store 106 may be used, for example, by a web application operating within the cloud infrastructure 102 to store files uploaded to the web application by a user. The cloud data store 106 may receive one or more files directly or indirectly from applications, such as mobile apps, operating on a user device 122 or a mobile device 322B. In an AWS implementation, the cloud data store 106 may be implemented with the AWS Simple Storage Service (S3). Other cloud data services may be used instead or in addition.
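
The following is a hedged sketch only, assuming the S3-based implementation mentioned above: it shows how an application might write an uploaded file into an S3 bucket serving as the cloud data store 106. The bucket name and object key are hypothetical, and the snippet assumes the boto3 library and configured AWS credentials.

```python
# Illustrative sketch: an application stores an uploaded file in the cloud
# data store 106, assumed here to be an S3 bucket. The bucket name and key
# are placeholders invented for this example.
import boto3

s3 = boto3.client("s3")

def store_upload(local_path: str, key: str) -> None:
    """Upload a user-provided file to the shared cloud data store."""
    with open(local_path, "rb") as f:
        s3.put_object(Bucket="example-app-uploads", Key=key, Body=f.read())

# Example usage:
# store_upload("report.docx", "uploads/report.docx")
```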

The cloud infrastructure 102 may include a scanning service 104. The scanning service 104 may be implemented with one or more scanning applications running on one or more virtual machines within the cloud infrastructure 102. The scanning service 104 may receive policies from a security manager 116 and also may provide status information, events, and alerts to the security manager 116.

The security manager 116 may be implemented within the cloud infrastructure 102 or outside the cloud infrastructure 102. The security manager 116 may provide a web-based management interface for configuration of the scanning service 104 and for an administrator to manage their use of the scanning service 104 and potentially other security applications. For example, the security manager 116 may provide management for endpoint protection, firewalls, and so forth. In some implementations, one or more firewalls included in the virtual private cloud 114 are also managed by the security manager 116.

The scanning service 104 may receive data updates from a data distribution service (DDS) 118. Data from the DDS 118 may include, for example, code updates and definitions of known or potential malicious files, portions of files, code, or content, or code that may be used to identify malicious files, applications, or the like. The definition files may contain one or more commands, definitions, patterns, or instructions to be parsed and acted upon, matched, or the like. Patterns may include, for example, identifying files or portions of files that fit a specific pattern, or that were identified in malicious files. Patterns also may include, for example, identifying code that has the same effect as code that is known to be malicious. The data updates may be used by the scanning service 104 when scanning files.
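
The following sketch is illustrative only: it shows one way a scanning service could apply definition updates received from the DDS 118, here reduced to simple byte and regular-expression patterns. Real definition formats, update mechanisms, and matching engines are more involved; the patterns and names below are assumptions for the example.

```python
# Hedged sketch of pattern-based scanning against a small definition set.
import re

DEFINITIONS = [
    (b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE", "eicar-test-signature"),
    (re.compile(rb"eval\(base64_decode\("), "suspicious-php-loader"),
]

def scan_bytes(data: bytes) -> list[str]:
    """Return the names of any definitions that match the file contents."""
    hits = []
    for pattern, name in DEFINITIONS:
        if isinstance(pattern, bytes):
            if pattern in data:
                hits.append(name)
        elif pattern.search(data):
            hits.append(name)
    return hits

# Example usage:
# hits = scan_bytes(open("upload.bin", "rb").read())
```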

The scanning service 104 may exchange security-related information, such as files or portions of files and resource reputation information, with a security data lookup service 120. The security data lookup service 120 may be provided within the cloud infrastructure 102 or outside the cloud infrastructure 102. The security data lookup service 120 may be used, for example, to check patterns identified by the scanning service 104, determine reputations of resources identified or provided by the scanning service 104, and so forth. The scanning service 104 may provide files or data to the data lookup service 120 for further analysis. In some implementations, the scanning service 104 may initiate sending data to the data lookup service 120 under a variety of circumstances, for example, if the scanning service 104 is unable to determine whether a file or a portion of a file is malicious, or the relevance of code or other content, or if the reputation of a file is unknown. The data lookup service 120 may request a file or data to be provided to the data lookup service 120 for further investigation.
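
A hedged sketch of the lookup interaction follows: the scanner hashes a file it cannot classify and asks a reputation endpoint for a verdict. The endpoint URL, request shape, and response field are assumptions invented for this example, not a documented API of the data lookup service 120.

```python
# Illustrative only: query a hypothetical reputation endpoint by file hash.
import hashlib
import json
import urllib.request

LOOKUP_URL = "https://lookup.example.com/reputation"  # hypothetical endpoint

def reputation_of(data: bytes) -> str:
    """Return a reputation verdict ("unknown" if the service has none)."""
    digest = hashlib.sha256(data).hexdigest()
    req = urllib.request.Request(
        LOOKUP_URL,
        data=json.dumps({"sha256": digest}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("verdict", "unknown")
```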

The scanning service 104 accesses files to be scanned directly from the cloud data store 106 that is used by the virtual machine instances, which avoids overhead and performance delay. The use of virtual machine instances for the scanning that are different from the virtual machine instances of the application facilitates management and reduces complexity. In some implementations, the scanning service provides alerts to an administrator, but does not attempt to control access to files. In some implementations, the scanning service may move files or change the names of files in order to control access. For example, to prevent access to a file, the scanning service may change the name or the location (e.g., path in a file system) of the file. In some implementations, the scanning service 104 may replace a file with another file that is “clean.”
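
Assuming an S3-backed data store as in the example implementation above, a minimal sketch of the "move to block access" approach might copy a flagged object under a quarantine prefix and delete the original. Bucket and prefix names here are illustrative placeholders.

```python
# Hedged sketch: "moving" an S3 object so applications can no longer find it.
import boto3

s3 = boto3.client("s3")

def quarantine(bucket: str, key: str) -> str:
    """Move a flagged object under a quarantine prefix and return its new key."""
    quarantined_key = "quarantine/" + key
    s3.copy_object(
        Bucket=bucket,
        Key=quarantined_key,
        CopySource={"Bucket": bucket, "Key": key},
    )
    s3.delete_object(Bucket=bucket, Key=key)
    return quarantined_key
```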

In some implementations, file permissions are used to control use of files. For example, if the scanning service 104 has been configured with an account having the appropriate permissions, the scanning service 104 may change the permissions of files in the cloud data store 106 to permit or deny access to those files by the applications in the virtual private cloud 114. Use of file permissions to control file access provides security for data without a need for lengthy setup or installation. This reduces the cost to deploy and provision and takes advantage of a key benefit of the cloud, namely distributing processing and avoiding the need for custom infrastructure, which in turn reduces the total cost of ownership for cloud applications.

In some implementations in which permissions are used, an application stores a file in the cloud data store 106 with default permissions that permit access by applications running in the virtual private cloud 114. The scanning service 104 receives notification of the storage event from the cloud data store 106, and the scanning service 104 scans the file. If access to the file needs to be restricted based on the scan, the scanning service 104 changes the permissions of the file so that applications in the virtual private cloud 114 can no longer access the file.

In some implementations in which permissions are used, an application stores a file in the cloud data store 106 with default permissions that do not permit access by applications running in the virtual private cloud 114. The scanning service 104 receives notification of the storage event from the cloud data store 106, and the scanning service 104 scans the file. If access to the file needs to be restricted based on the scan, the scanning service 104 does not change the permissions of the file, so applications in the virtual private cloud 114 still cannot access the file. If access to the file does not need to be restricted based on the scan, the scanning service 104 changes the permissions of the file so that applications in the virtual private cloud 114 may access the file.
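
A minimal sketch of the default-deny flow described above, under the AWS assumption: the data store emits an S3 event notification when an object is stored, the scanning service fetches and scans the object, and only objects that come back clean receive a more permissive ACL. The event structure and canned ACL names are standard S3 features; the scan_bytes helper is the illustrative matcher sketched earlier, and the chosen ACL is a placeholder for whatever grant the application account actually needs.

```python
# Hedged sketch of an event-driven scan that relaxes permissions only for
# clean objects; flagged objects keep the restrictive default permissions.
import boto3

s3 = boto3.client("s3")

def handle_storage_event(event: dict) -> None:
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        if not scan_bytes(body):  # no definitions matched: treat as clean
            # Illustrative canned ACL; the real grant depends on how the
            # applications in the virtual private cloud are configured.
            s3.put_object_acl(Bucket=bucket, Key=key, ACL="authenticated-read")
        # Otherwise leave the default (restrictive) permissions in place.
```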

The cloud infrastructure 102 may include a virtual private cloud 114 (VPC) including one or more computing resources on which virtual machine instances are implemented. For example, the virtual private cloud 114 may include one or more applications such as a software application, a web application, a virtual desktop, a server application, etc. Applications in the virtual private cloud 114 may access and store data in the cloud data store 106, depending on the permissions assigned to the files in the cloud data store 106. In some implementations, some or all of the applications may be implemented on infrastructure inside or outside of the cloud. For example, applications may be implemented in a co-location facility or in a data center not associated with a cloud infrastructure. As another example, applications may be implemented on a user device, such as a mobile app or desktop computer application. Applications implemented outside of the cloud may make use of cloud resources, such as cloud storage. Use of the scanning techniques described with respect to cloud storage may be useful even if the applications are partially or entirely implemented outside of the cloud infrastructure, for example, with the exception of the cloud storage.

User devices 122 may be in communication with the data store. The user devices 122 may have applications that directly store data in the data store 106. The user devices 122 may be in communication with one or more applications in the virtual private cloud 114, which in turn store data in the data store 106. 

An example is presented in which the cloud data store 106 includes three files: a clean file 108, a clean file 110, and a malicious file 112. The clean file 108, clean file 110, and malicious file 112 may be any sort of data file or collection of data files (e.g., a word processing file, an image, a video, an archive collection of files, etc.). In this example, there may be a first clean file 108 and a second clean file 110. The clean files 108, 110 may be clean in the sense that they do not contain content that would be identified by the scanning service 104 as requiring reporting or restriction. The cloud data store 106 also includes a third file, the malicious file 112, which contains content that may be identified by the scanning service 104 as requiring restriction of file access. For example, the malicious file 112 may include malware or other malicious content. As another example, the file may include content that should be protected from distribution under a policy.

In some implementations, access to the malicious file 112 by applications running on the virtual private cloud 114 may be prevented through the use of permissions associated with the malicious file 112 within the data store 106, while the clean files 108, 110 may have other permissions assigned, so applications running on the virtual private cloud 114 would not be blocked. As a result, applications running on the VPC may access the clean files 108, 110 but not the malicious file 112. In some implementations, the file names of the clean files 108, 110 are not changed, but the file name of the malicious file 112 is changed such that applications running on the virtual private cloud 114 cannot access the malicious file 112. In some implementations, the clean files 108, 110 are not moved, but the malicious file 112 is moved such that applications running on the virtual private cloud 114 cannot access the malicious file 112.


Parts List

100

example system

102

cloud infrastructure

104

scanning service

106

cloud data store

108

clean files

110

clean files

112

malicious file

114

virtual private cloud

116

security manager

118

DDS

120

data lookup service

122

user devices


Terms/Definitions

file name

instantiations

web-based management interface

Google

infrastructure-as-a-service providers

files or portions

elements

distribution

particular embodiments

security-related information

ordinary skill

further analysis

advantage

video

implementations

functional information one

data store

diamond

application stores

circuits

same effect

three files

code

Microsoft Azure

application templates or instances

digital signal processor circuit

malicious file

desirable order

relevance

firewalls

virtual private cloud (VPC)

malicious files

instructions

cloud-based infrastructure

virtual private clouds

syntax

account

Joyent

file system

reporting or restriction

sort

S3 buckets

service

scanning service

clean files

potential malicious files

AWS service

alerts

data

their use

user devices

third file

server application

benefits

functionally equivalent circuits

cloud infrastructure service

particular programming language

default permissions

policies

co-location facility

total cost

mobile app

configuration

workflow

facility

portions

Amazon Web Services

administration

security data

turn

instances

initialization

per-seat license

second clean file

many routine program elements

AWS implementation

administrator

virtual desktop

reputation

permission

various implementations

Data Distribution Service (DDS)

AWS Simple Storage Service

Rackspace

software applications

patterns

circumstances

application administrator

file names

hardware implementation

management

present invention

storage

computer software instructions or groups

computer software instructions

communication

customers’ workflows

mobile device

flow diagrams

example implementation

software application

example system

files or data

presently disclosed methods

store files

loops and variables

access

virtual private cloud

invention

agent

adjustments or modifications

cloud

permissions

location

first clean file

particular implementation

data protection capability

applications

VMware

word processing file

application

scanning techniques

resource management

cloud infrastructure services

computing resources

cloud resources

deploy and provision

additional impact

cloud scanning provider

reputations

cloud storage

specific pattern

block diagram

related storage system

endpoint protection

costs

cloud billing models

auto-scaling environment

requirement

custom infrastructure

infrastructure

exception

code updates and definitions

lengthy setup or installation

complexity

mobile apps

scan

virtual machines

storage event

portion

events

web application

unordered meaning

user

data file or collection

status information

resources

cloud infrastructure

figure

definition files

particular sequence

processing

data center

administrators

policy

potentially other security applications

restriction

application facilitate management

file

appropriate permissions

implementation

cloud applications

delay

processing and decision blocks

security

processing blocks

file permissions

virtual machine instances

computer software

result

data lookup service

specific integrated circuit

example

cloud data store

archive collection

file access

need

notification

respect

zones

spirit

execution

load

content

security manager

suitable cloud service

data storage

usage-based billing model

ownership

data lookup

steps

image

architecture and concept

user device

desktop computer application

groups

temporary variables

files

rectangular elements

fault tolerance and availability

installation and registration

file or data

permissions and authentication configuration

addition

CenturyLink Cloud

files and resource reputation information

further investigation

system

variety

scanning

name

data files

scanning application and datasets

data updates

malware

Network Architecture


Drawings

Brief Description:

Figure 1 illustrates a network architecture in which a group of mobile devices and services communicate over a network.

Detailed Description:

As illustrated in Figure 1, a general network topology implemented in one embodiment of the invention can include a group of “client” or “peer” mobile computing devices A-D (mobile device A 112, mobile device B 114, mobile device C 116, mobile device D 118), respectively, communicating with one another and with one or more services (CDX service 104, matchmaker service 106, and invitation service 108) over a network 110. Although illustrated as a single network cloud in Figure 1, the network 110 can include a variety of different components including public networks such as the internet and private networks such as local Wi-Fi networks (e.g., 802.11n home wireless networks or wireless hotspots), local area Ethernet networks, cellular data networks (e.g., 3G, EDGE, etc.), and WiMAX networks, to name a few. For example, mobile device A 112 may be connected to a home Wi-Fi network represented by network link A 120, mobile device B 114 may be connected to a 3G network (e.g., Universal Mobile Telecommunications System (“UMTS”), High-Speed Uplink Packet Access (“HSUPA”), etc.) represented by network link B 122, mobile device C 116 may be connected to a WiMAX network represented by network link C 124, and mobile device D 118 may be connected to a public Wi-Fi network represented by network link D 126. Each of the local network links (network link A 120, network link B 122, network link C 124, network link D 126) over which the mobile devices (mobile device A 112, mobile device B 114, mobile device C 116, mobile device D 118) are connected may be coupled to a public network such as the internet through a gateway and/or NAT device (not shown in Figure 1), thereby enabling communication between the various mobile devices (mobile device A 112, mobile device B 114, mobile device C 116, mobile device D 118) over the public network. However, if two mobile devices are on the same local or private network (e.g., the same Wi-Fi network), then the two devices may communicate directly over that local/private network, bypassing the public network. It should be noted, of course, that the underlying principles of the invention are not limited to any particular set of network types or network topologies.

Each of the mobile devices (mobile device A 112, mobile device B 114, mobile device C 116, mobile device D 118) illustrated in Figure 1 can communicate with a connection data exchange (CDX service 104), a matchmaker service 106, and an invitation service 108. In one embodiment, the services (CDX service 104, matchmaker service 106, and invitation service 108) can be implemented as software executed across one or more physical computing devices such as servers. As shown in Figure 1, in one embodiment, the services (CDX service 104, matchmaker service 106, and invitation service 108) may be implemented within the context of a larger data service 102 managed by the same entity (e.g., the same data service provider) and accessible by each of the mobile devices (mobile device A 112, mobile device B 114, mobile device C 116, mobile device D 118) over the network 110. The data service 102 can include a local area network (e.g., an Ethernet-based LAN) connecting various types of servers and databases. The data service 102 may also include one or more storage area networks (“SANs”) for storing data. In one embodiment, the databases store and manage data related to each of the mobile devices (mobile device A 112, mobile device B 114, mobile device C 116, mobile device D 118) and the users of those devices (e.g., user account data, device account data, user application data, etc.).

In one embodiment, matchmaker service 106 can match two or more mobile devices for a collaborative P2P session based on a specified set of conditions. For example, users of two or more of the mobile devices may be interested in playing a particular multi-player game. In such a case, the matchmaker service 106 may identify a group of mobile devices to participate in the game based on variables such as each user’s level of expertise, the age of each of the users, the timing of the match requests, the particular game for which a match is requested, and various game-specific variables. By way of example, and not limitation, the matchmaker service 106 may attempt to match users with similar levels of expertise at playing a particular game. Additionally, adults may be matched with other adults and children may be matched with other children. Moreover, the matchmaker service 106 may prioritize user requests based on the order in which those requests are received. The underlying principles of the invention are not limited to any particular set of matching criteria or any particular type of P2P application.
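
As a purely illustrative sketch of such matching, the following groups pending requests by game and age group, keeps expertise within a small gap, and honors arrival order. The fields, threshold, and group size are assumptions for the example, not part of the matchmaker service 106.

```python
# Hedged sketch of one possible matching policy over pending requests.
from dataclasses import dataclass

@dataclass
class MatchRequest:
    device_id: str
    game: str
    age_group: str      # e.g. "adult" or "child"
    expertise: int      # e.g. 1 (novice) .. 10 (expert)

def find_match(pending: list[MatchRequest], needed: int = 2,
               max_skill_gap: int = 2) -> list[MatchRequest] | None:
    """Return the earliest group of compatible requests, or None."""
    for anchor in pending:                       # earliest request first
        group = [r for r in pending
                 if r.game == anchor.game
                 and r.age_group == anchor.age_group
                 and abs(r.expertise - anchor.expertise) <= max_skill_gap]
        if len(group) >= needed:
            return group[:needed]
    return None
```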

As described in detail below, in response to a match request, the matchmaker service 106 can coordinate with the CDX service 104 to ensure that all matched participants receive the necessary connection data for establishing P2P sessions in an efficient and secure manner.

In one embodiment, the invitation service 108 also identifies mobile devices for participation in collaborative P2P sessions. However, in the case of the invitation service 108, at least one of the participants is specifically identified by another participant. For example, the user of mobile device A 112 may specifically request a collaborative session with the user of mobile device B 114 (e.g., identifying mobile device B 114 with a user ID or phone number). As with the matchmaker service 106, in response to an invitation request, the invitation service 108 can identify the set of participants and coordinate with the CDX service 104 to ensure that all participants receive the necessary connection data for establishing P2P sessions in an efficient and secure manner.

As mentioned above, in one embodiment, the CDX service 104 operates as a central exchange point for connection data required to establish P2P sessions between two or more mobile devices. Specifically, one embodiment of the CDX service generates NAT traversal data (sometimes referred to as “Hole Punch” data) in response to mobile device requests to enable external services and clients to communicate through the NAT of each mobile device (i.e., to “punch a hole” through the NAT to reach the device). For example, in one embodiment, the CDX service detects the external IP address and port needed to communicate with the mobile device and provides this information to the mobile device. In one embodiment, the CDX service also receives and processes lists of mobile devices generated by the matchmaker service 106 and invitation service 108 and efficiently and securely distributes connection data to each of the mobile devices included on the lists (as described in detail below). 
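A toy sketch of that detection idea, under the UDP assumption discussed below: a server can learn a device's external (post-NAT) IP address and port simply by reading the source of an incoming datagram and echoing it back. The port number and message format here are arbitrary choices for the example.

```python
# Hedged sketch: echo back the externally observed address of a UDP sender,
# which is the core of the "hole punch" data the text describes.
import json
import socket

def run_address_echo(port: int = 9999) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        _, addr = sock.recvfrom(1024)            # addr is (external_ip, port)
        reply = json.dumps({"ip": addr[0], "port": addr[1]}).encode()
        sock.sendto(reply, addr)                 # tell the device what we saw
```
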

In one embodiment, communication between the mobile devices and the CDX service 104 is established using a relatively lightweight network protocol such as User Datagram Protocol (“UDP”) sockets. As is known by those of skill in the art, UDP socket connections do not require hand-shaking dialogues for guaranteeing packet reliability, ordering, or data integrity and, therefore, do not consume as much packet processing overhead as TCP socket connections. Consequently, UDP’s lightweight, stateless nature is useful for servers that answer small queries from a vast number of clients. Moreover, unlike TCP, UDP is compatible with packet broadcasting (in which packets are sent to all devices on a local network) and multicasting (in which packets are sent to a subset of devices on the local network). As described below, even though UDP may be used, security can be maintained on the CDX service 104 by encrypting NAT traversal data using session keys.

In contrast to the low-overhead, lightweight network protocol used by the CDX service 104, in one embodiment, communication between the mobile devices (mobile device A 112, mobile device B 114, mobile device C 116, mobile device D 118) and the matchmaker service 106 and/or invitation service 108 is established with an inherently secure network protocol such as Hypertext Transfer Protocol Secure (“HTTPS”), which relies on Secure Sockets Layer (“SSL”) or Transport Layer Security (“TLS”) connections. Details associated with these protocols are well known by those of skill in the art. 

Specific examples in which mobile devices establish primary and secondary communication channels will now be described with respect to Figure 2. It should be noted, however, that the underlying principles of the invention are not limited to the particular set of communication links and communication channels shown in Figure 2.

Brief Description:

Figure 2 illustrates a group of mobile devices connected through primary and secondary communication channels.

Detailed Description:

In Figure 2, mobile device A 112 is capable of connecting to a network 110 (e.g., the internet) over communication link B 210 with NAT device B 206 and over communication link A 208 with NAT device A 202. Similarly, mobile device C 116 is capable of connecting to the network 110 over communication link C 214 with NAT device C 212 and over communication link D 216 with NAT device D 204. By way of example, and not limitation, the communication links (communication link B 210 and communication link C 214) may be 3G communication links and the communication links (communication link A 208 and communication link D 216) may be Wi-Fi communication links.

Consequently, in this example, there are four different communication channels which may be established between mobile device A 112 and mobile device C 116: a first channel which uses links (communication link B 210 and communication link C 214); a second channel which uses links (communication link B 210 and communication link D 216); a third channel which uses links (communication link A 208 and communication link C 214); and a fourth channel which uses links (communication link A 208 and communication link D 216). In one embodiment, mobile devices A and C will select one of these channels as the primary communication channel based on a prioritization scheme and will select the three remaining channels as backup communication channels. For example, one prioritization scheme may be to select the channel with the highest bandwidth as the primary channel and to use the remaining channels as the secondary channels. If two or more channels have comparable bandwidth, the prioritization scheme may include selecting the least expensive channel (assuming that the user pays a fee to use one or more of the channels). Alternatively, the prioritization scheme may be to select the least expensive channel as the primary channel and, if the cost of each channel is the same, to select the highest bandwidth channel. Various different prioritization schemes may be implemented while still complying with the underlying principles of the invention.
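
The bandwidth-then-cost scheme described above can be sketched as follows; the channel attributes and types are illustrative placeholders, not part of the described system.

```python
# Hedged sketch: pick the highest-bandwidth channel as primary, break ties by
# cost, and treat the rest as backup (secondary) channels.
from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    bandwidth_mbps: float
    cost_per_mb: float

def prioritize(channels: list[Channel]) -> tuple[Channel, list[Channel]]:
    ordered = sorted(channels,
                     key=lambda c: (-c.bandwidth_mbps, c.cost_per_mb))
    return ordered[0], ordered[1:]   # (primary, secondary/backup channels)
```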

Mobile device A 112 and mobile device C 116 may utilize the techniques described above to establish the primary communication channel (e.g., by exchanging connection data via the CDX service 104). Alternatively, mobile device A 112 and mobile device C 116 may implement standard Interactive Connectivity Establishment (“ICE”) transactions to exchange the connection data. Regardless of how the primary channel is established, once it is, mobile device A 112 and mobile device C 116 may exchange connection data for the secondary communication channels over the primary communication channel. For example, if the primary communication channel in Figure 2 includes communication link A 208 and communication link C 214, then this connection, once established, may be used to exchange connection data for secondary communication channels which include communication links (communication link B 210 and communication link C 214). In this example, the connection data exchanged over the primary communication channel may include NAT traversal data and NAT type data for NAT device B 206 and NAT device C 212, including public and private IP addresses/ports for each of the mobile devices.

Once the secondary communication channels have been established, they may be kept open using heartbeat packets. For example, device A may periodically transmit a small “heartbeat” packet to device C and/or device C may periodically transmit a small “heartbeat” packet to device A to ensure that the NAT ports used for the secondary channels remain open (NATs will often close ports due to inactivity). The heartbeat packets may be UDP packets with no payload, although the underlying principles of the invention are not limited to any particular packet format. Alternatively, the heartbeat packets may be UDP packets with a self-identifying type field in their payload header, and may contain optional additionally-formatted information including, but not limited to, a channel time-to-live value.
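
A minimal sketch of such a keep-alive, assuming plain UDP: the peer address and interval are placeholders, and the empty payload mirrors the no-payload variant described above.

```python
# Hedged sketch: refresh a NAT binding for a secondary channel by periodically
# sending a tiny UDP "heartbeat" datagram to the peer.
import socket
import time

def send_heartbeats(peer: tuple[str, int], interval_s: float = 15.0) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        sock.sendto(b"", peer)       # empty payload is enough to refresh NAT
        time.sleep(interval_s)
```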

Brief Description:

Figure 3 illustrates a group of mobile devices connected through primary and secondary communication channels.

Detailed Description:

Figure 3 illustrates the same network configuration as shown in Figure 2 with the addition of mobile device B 114 connected directly to the network 110 and connected to mobile device C 116 through a private network 302 connection. The private network 302 may be, for example, a Bluetooth PAN connection between mobile device B 114 and mobile device C 116. It can be seen from this example that switching from a primary channel to a secondary channel may dramatically alter the network topology.

Brief Description:

Figure 4 illustrates the resulting network topologies from Figure 3.

Detailed Description:

For example, as shown in Figure 4, if the primary channel A 402 for the mobile devices includes communication link C 214 (resulting in direct connections between devices A, B, and C) and the secondary channels include the private network 302, then the network topology may change as illustrated in Figure 4 because the only way for device A and device C to communicate using the private network is through device B. While this is a simplified example with only three devices, a significantly larger number of devices may be used, resulting in a variety of different network topology configurations when switching between primary and secondary communication channels.

Brief Description:

Figure 5 illustrates a network architecture in which a group of mobile devices and services, including a registration/directory service 502 and a push notification service 504, communicate over a network 110.

Detailed Description:

As illustrated in Figure 5, in addition to the CDX service 104, matchmaker service 106, and invitation service 108 (some embodiments of which are described above), one embodiment of the invention can include a registration/directory service 502, a push notification service 504, and a relay service 506. As mentioned above, in one embodiment, the invitation service 108 and/or the matchmaker service 106 can use the registration/directory service 502 to identify registered mobile devices and the push notification service 504 to push data to the mobile devices. In one embodiment, when a mobile device is activated on the network, it registers a “push token” (sometimes referred to as a “notification service account identifier” in the Push Notification Application) with a database maintained by the registration/directory service 502 by associating the push token with a password-protected user ID or a telephone number. If the push token is identified in the registration directory (e.g., by performing a query with the user ID), the push notification service 504 can use the push token to transmit push notifications to a mobile device. In one embodiment, the push notification service is the Apple push notification service (“APNS”) designed by the assignee of the present application and described, for example, in the Push Notification Application referenced above.
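
The registration and lookup idea can be sketched, very loosely, as a mapping from user IDs to push tokens. The in-memory dictionary below stands in for the directory database, and the function names are invented for the example; they are not the service's actual interface.

```python
# Hedged sketch: associate a push token with a user ID at activation, then
# look the token up when a push notification needs to be sent.
push_directory: dict[str, str] = {}

def register_push_token(user_id: str, push_token: str) -> None:
    """Record the device's push token against its (password-protected) user ID."""
    push_directory[user_id] = push_token

def token_for(user_id: str) -> str | None:
    """Used by the push notification side to target a device, if registered."""
    return push_directory.get(user_id)
```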


Parts List

102

data service

104

CDX service

106

matchmaker service

108

invitation service

110

network

112

mobile device A

114

mobile device B

116

mobile device C

118

mobile device D

120

network link A

122

network link B

124

network link C

126

network link D

202

NAT device A

204

NAT device D

206

NAT device B

208

communication link A

210

communication link B

212

NAT device C

214

communication link C

216

communication link D

302

private network

304

communication link E

306

communication link F

402

primary channel A

404

primary channel B

502

registration /directory service

504

push notification service

506

relay service


Terms/Definitions

device

other children

same entity

first channel

remaining channels

mobile device B

various different prioritization schemes

subset

particular device configuration

User Datagram Protocol

primary and secondary communication channels

addition

direct connections

hand-shaking dialogues

expertise

primary channel A

skill

same data service provider

Transport Layer Security

servers and databases

users

match request

network topology

case

information

wireless hotspots

standard Internet Connectivity Establishment

P2P sessions

Secure Sockets Layer

links

other adults and children

High-Speed Uplink Packet Access

private network connection

network types or network topologies

NAT traversal data

channels

conditions

invention

secondary channels

CDX service

underlying principles

communication link A

two or more channels

network link C

software

primary communication channel

mobile device A

variety

communication link C

“SANs”

backup communication channels

Ethernet-based LAN

communication channels

secondary communication channels

databases store

vast number

devices

highest bandwidth channel

participation

public network

home Wi-Fi network

phone number

user application data

P2P application

service

connection data

order

switching

security

relay service

stateless nature

game

protocols

third channel

their payload header

various mobile devices 120

general network topology

cellular data networks

802.11n home

such a case

internet

NAT device

local network

requests

network link D

matching criteria

small queries

second channel

WiMAX networks

only three devices

local area Ethernet networks

UDP socket connections

communication link B

3 G communication links

device B

payload

larger data service

mobile device C

link management module

mobile device requests

optional additionally-formatted information

lists

networks

servers

Hypertext Transfer Protocol Secure

NATs

same Wi-Fi network

public Wi-Fi network

low-overhead

NAT device B

particular game

detail

NAT ports

external IP address

channel time-to-live value

NAT device D

connection data exchange

participants

particular multi-player game

mobile devices and services

example

data integrity

various game-specific variables

various types

two or more mobile devices

specified set

clients

private network

mobile device D

response

particular packet format

limitation

small “heartbeat” packet

Universal Mobile Telecommunications System

four different communication channels

edge

collaborative P2P sessions

Bluetooth PAN connection

techniques

two devices

local/private network

timing

“UMTS”

simplified example

user requests

collaborative P2P session

packet broadcasting

network architecture

data service

inherently secure network protocol

NAT device A

similar levels

registration /directory service

external services

3G network

different network topology configurations

NAT type data

heartbeat packets

communication link D

primary

method

network link B

highest bandwidth

lightweight network protocol

invitation request

context

match

local area network

channel

communication link F

specific examples

details

primary channel B

public networks

contrast

single network cloud

group

secure manner

network

socket connections

packets

open using heartbeat packets

UDP packets

different components

bandwidth

course

two mobile devices

efficient

Wi-Fi communication links

user

user ID

matchmaker service

network link A

device C

resulting network topologies

port

user account data

prioritization scheme

relatively lightweight network protocol

collaborative session

push notification service

lightweight

central exchange point

only way

public and private IP addresses/ports

significantly larger number

local Wi-Fi networks

variables

private networks

particular set

WiMAX network

match requests

user’s

ports

invitation service

packet reliability

participant

NAT device C

three remaining channels

level

data

gateway

device account data

self-identifying type field

same network configuration

adults

communication link E

connection

session keys

least expensive channel

matched participants

secondary channel

Network Architecture and Hardware Environment


Drawings

Brief Description:

illustrates a network architecture, in accordance with one embodiment.

Detailed Description:

Figure 1 illustrates an architecture 100, in accordance with one embodiment. As shown in Figure 1, a plurality of remote networks 102 are provided including a first remote network 104 and a second remote network 106. A gateway 112 may be coupled between the remote networks 102 and a proximate network 108. In the context of the present architecture 100, the networks 104, 106 may each take any form including, but not limited to, a LAN, a WAN such as the internet, a public switched telephone network (PSTN), an internal telephone network, etc.

In use, the gateway 112 serves as an entrance point from the remote networks 102 to the proximate network 108. As such, the gateway 112 may function as a router, which is capable of directing a given packet of data that arrives at the gateway 112, and a switch, which furnishes the actual path in and out of the gateway 112 for a given packet.

Further included is at least one data server 114 coupled to the proximate network 108, which is accessible from the remote networks 102 via the gateway 112. It should be noted that the data server(s) 114 may include any type of computing device/groupware. Coupled to each data server 114 is a plurality of user device(s) 110. User device(s) 110 may also be connected directly through one of the networks 104, 106, 108. Such user device(s) 110 may include a desktop computer, lap-top computer, hand-held computer, printer, or any other type of logic. It should be noted that a user device 110 may also be directly coupled to any of the networks, in one embodiment.

A peripheral(s) 116 or series of peripheral(s) 116, e.g., facsimile machines, printers, networked and/or local storage units or systems, etc., may be coupled to one or more of the networks 104, 106, 108. It should be noted that databases and/or additional components may be utilized with, or integrated into, any type of network element coupled to the networks 104, 106, 108. In the context of the present description, a network element may refer to any component of a network.

According to some approaches, methods and systems described herein may be implemented with and/or on virtual systems and/or systems which emulate one or more other systems, such as a UNIX system which emulates an IBM z/OS environment, a UNIX system which virtually hosts a MICROSOFT WINDOWS environment, a MICROSOFT WINDOWS system which emulates an IBM z/OS environment, etc. This virtualization and/or emulation may be enhanced through the use of VMWARE software, in some embodiments.

In more approaches, one or more networks 104, 106, 108, may represent a cluster of systems commonly referred to as a “cloud.” In cloud computing, shared resources, such as processing power, peripherals, software, data, servers, etc., are provided to any system in the cloud in an on-demand relationship, thereby allowing access and distribution of services across many computing systems. Cloud computing typically involves an internet connection between the systems operating in the cloud, but other techniques of connecting the systems may also be used. 

Brief Description:

shows a representative hardware environment that may be associated with the servers and/or clients of Figure 1, in accordance with one embodiment.

Detailed Description:

Figure 2 shows a representative hardware environment associated with a user device(s) 110 and/or server 114 of Figure 1, in accordance with one embodiment. Such figure illustrates a typical hardware configuration of a workstation having a central processing unit 202, such as a microprocessor, and a number of other units interconnected via a system bus 216.

The workstation shown in Figure 2 includes a Random Access Memory (RAM) 206, read only memory (ROM) 210, an I/O adapter 204 for connecting peripheral devices such as disk storage units 212 to the bus 216, a user interface adapter 218 for connecting a keyboard 220, a mouse 226, a speaker 224, a microphone 222, and/or other user interface devices such as a touch screen and a digital camera (not shown) to the bus 216, a communication adapter 208 for connecting the workstation to a communication network 214 (e.g., a data processing network), and a display adapter 228 for connecting the bus 216 to a display device 230.

The workstation may have resident thereon an operating system such as the Microsoft Windows® operating system (OS), a MAC OS, a UNIX OS, etc. It will be appreciated that a preferred embodiment may also be implemented on platforms and operating systems other than those mentioned. A preferred embodiment may be written using XML, C, and/or C++ language, or other programming languages, along with an object oriented programming methodology. Object oriented programming (OOP), which has become increasingly used to develop complex applications, may be used. 


Parts List

100

architecture

102

remote networks

104

first remote network

106

second remote network

108

proximate network

110

user device(s)

112

114

data server

116

peripheral(s)

200

item

202

central processing unit

204

I/O adapter

206

Random Access Memory (RAM)

208

communication adapter

210

read only memory (ROM)

212

disk storage units

214

network

216

system bus

218

user interface adapter

220

keyboard

222

microphone

224

speaker

226

mouse

228

display adapter

230

display device


Terms/Definitions

printers

cloud computing

microphone

series

internet connection

gateway

representative hardware environment

typical hardware configuration

many computing systems

UNIX system

display device

processing power

more approaches

keyboard

speaker

computing device/groupware

operating system

central processing unit

user interface adapter

platforms

MICROSOFT WINDOWS environment

software

such user devices

desktop computer

access and distribution

plurality

object oriented programming

virtual systems and/or systems

type

networked and/or local storage units

printer

first remote network

PSTN

number

shared resources

read only memory (ROM)

additional components

systems

telephone network

other techniques

complex applications

network

databases

proximate network

router

microprocessor

systems operating

UNIX OS

data

mouse

operating systems

present architecture

VMWARE software

other user interface devices

system

other type

lap-top computer

communication adapter

object oriented programming methodology

data server

actual path

one embodiment

entrance point

context

peripheral(s)

methods and systems

touch screen

remote networks

Microsoft Windows.RTM

peripheral devices

such figure

data server(s)

resident

cloud

given packet

display adapter

IBM z/OS environment

internet

user device(s)

form

servers

at least one data server

networks

switch

on-demand relationship

workstation

server

MICROSOFT WINDOWS system

digital camera

virtualization and/or emulation

one or more other systems

disk storage units

facsimile machines

communication network

one or more networks

internal telephone network

preferred embodiment

services

present description

other units

system bus

hand-held computer

XML, C, and/or C++ language

component

data processing network

cluster

Random Access Memory (RAM)

architecture

second remote network

other programming languages

approaches

logic

network element

I/O adapter

embodiments

Cloud Environment


Drawings

Brief Description:

illustrates a Cloud computing node 100 in accordance with one embodiment.

Detailed Description:

As shown in Figure 1, computer system/server 102 in Cloud computing node 100 is shown in the form of a general-purpose computing device. The components of computer system/server 102 may include, but are not limited to, one or more processors or processing units 104, a system memory 106, and a bus 126 that couples various system components including the system memory 106 to the processing units 104.

Bus 126 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus.

Computer system/server 102 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server 102, and it includes both volatile and non-volatile media, removable and non-removable media.

System memory 106 can include computer system readable media in the form of volatile memory, such as Random access memory (RAM) 108 and/or cache memory 110. Computer system/server 102 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, a storage system 112 can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”) and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus 126 by one or more data media interfaces. As will be further depicted and described below, system memory 106 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of the invention.

Program/utility 114 having a set (at least one) of program modules 116 may be stored in system memory 106 by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Program modules 116 generally carry out the functions and/or methodologies of the invention as described herein.

Computer system/server 102 may also communicate with one or more external devices 122 such as a keyboard, a pointing device, a display 120, etc.; one or more devices that enable a user to interact with computer system/server 102; and/or any devices (e.g., network card, modem, etc.) that enable computer system/server 102 to communicate with one or more other computing devices. Such communication can occur via I/O interfaces 118. Still yet, computer system/server 102 can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter 124. As depicted, network adapter 124 communicates with the other components of computer system/server 102 via bus 126. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system/server 102. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc.

Brief Description:

illustrates an item 200 in accordance with one embodiment.

Detailed Description:

Referring now to Figure 2, illustrative cloud computing environment 210 is depicted. As shown, cloud computing environment 210 comprises one or more Cloud computing nodes 100 with which computing devices such as, for example, a personal digital assistant (PDA) or cellular telephone 204, a desktop computer 208, a laptop 202, and/or an automobile computer system 206 communicate. This allows for infrastructure, platforms, and/or software to be offered as services (as described above in Section I) from cloud computing environment 210, so as to not require each client to separately maintain such resources. It is understood that the types of computing devices shown in Figure 2 are intended to be illustrative only and that cloud computing environment 210 can communicate with any type of computerized device over any type of network and/or network-addressable connection (e.g., using a web browser).

Brief Description:

illustrates an item 300 in accordance with one embodiment.

Detailed Description:

Referring now to Figure 3, a set of functional abstraction layers provided by cloud computing environment 210 ( Figure 2) is shown. It should be understood in advance that the components, layers, and functions shown in Figure 3 are intended to be illustrative only, and the invention is not limited thereto. As depicted, the following layers and corresponding functions are provided:

Hardware and software layer 308 includes hardware and software components. Examples of hardware components include mainframes, in one example IBM® zSeries® systems; RISC (Reduced Instruction Set Computer) architecture based servers, in one example IBM pSeries® systems; IBM xSeries® systems; IBM BladeCenter® systems; storage devices; and networks and networking components. Examples of software components include network application server software, in one example IBM WebSphere® application server software; and database software, in one example IBM DB2® database software. (IBM, zSeries, pSeries, xSeries, BladeCenter, WebSphere, and DB2 are trademarks of International Business Machines Corporation in the United States, other countries, or both.)

Virtualization layer 306 provides an abstraction layer from which the following exemplary virtual entities may be provided: virtual servers; virtual storage; virtual networks, including virtual private networks; virtual applications; and virtual clients.

Management layer 304 provides the exemplary functions described below. Resource provisioning provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the Cloud computing environment. Metering and Pricing provide cost tracking as resources are utilized within the Cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may comprise application software licenses. Security provides identity verification for users and tasks, as well as protection for data and other resources. User portal provides access to the Cloud computing environment for both users and system administrators. Service level management provides Cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment provides pre-arrangement for, and procurement of, Cloud computing resources for which a future requirement is anticipated in accordance with an SLA.

Workloads layer 302 provides functionality for which the Cloud computing environment is utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation; software development and lifecycle management; virtual classroom education delivery; data analytics processing; transaction processing; and resource credit management. As mentioned above, all of the foregoing examples described with respect to Figure 3 are illustrative only, and the invention is not limited to these examples.


Parts List

100

Cloud computing node

102

computer system/server

104

processing units

106

system memory

108

Random access memory (RAM)

110

cache memory

112

storage system

114

program/utility

116

program modules

118

I/O interfaces

120

display

122

external devices

124

network adapter

126

bus

200

item

202

laptop

204

cellular telephone

206

automobile computer system

208

desktop computer

210

cloud computing environment

300

item

302

workloads layer

304

management layer

306

virtualization layer

308

hardware and software layer


Terms/Definitions

Broad network access

capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).

Community Cloud

the Cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises.

Private Cloud

the Cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises.

Hybrid Cloud

the Cloud infrastructure is a composition of two or more Clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., Cloud bursting for load-balancing between Clouds).

Resource pooling

the provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.

Cloud Platform as a Service (PaaS)

the capability provided to the consumer is to deploy onto the Cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying Cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.

Cloud Software as a Service (SaaS)

the capability provided to the consumer is to use the provider’s applications running on a Cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying Cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Public Cloud

the Cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling Cloud services.

Cloud computing

a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This Cloud model promotes availability and comprises at least five characteristics, at least three service models, and at least four deployment models.

On-demand self-service

a consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed, automatically without requiring human interaction with each service’s provider.

Measured service

cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported providing transparency for both the provider and consumer of the utilized service.

Cloud Infrastructure as a Service (IaaS)

the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying Cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

Rapid elasticity

capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

Machine-to-Machine Instant Messaging


Drawings

Brief Description:

Figure 1 is a system diagram illustrating an example of a system for connecting devices, such as internet of things (IoT) devices, other devices or machines, and/or systems, according to some embodiments

Detailed Description:

Figure 1 depicts a system 100 for connecting devices, such as IoT devices, other devices or machines, and/or systems. An IoT device may include any network-connectable device or system having sensing or control functionality. An IoT device may be connectable to a local area network (LAN), a personal area network (PAN), and a wide area network (WAN). For example, an IoT device may include one or more radios operating using one or more communications protocols that allow the IoT device to connect to one or more LANs or PANs, such as WiFi.TM., ZigBee.TM., Bluetooth.TM., Bluetooth Low Energy.TM. (BLE), Infrared Data Association, Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and any other suitable protocol that allows connection to a LAN. A LAN may interconnect various network devices and provide the network devices with the ability to connect to a WAN. A router, modem, access point, or other switching mechanism may be used to control and manage the connections to the network devices. A PAN may provide network access for a user’s personal devices (e.g., a network for connecting devices worn or carried by the user, for connecting devices located in the user’s workspace, or the like), and may further provide access to other networks, such as a LAN or a WAN. The IoT device may further include one or more radios that allow the IoT device to connect to a WAN, such as the internet, a private cloud network, a public cloud network, or any other network external to a local network. The system 100 may also include third-party messaging services (e.g., Facebook, Twitter, LinkedIn, SMS, etc.) as well as non-IoT devices and systems

The system 100 may include one or more remote servers, or clouds, that are in communication with other devices or systems via a network, such as the internet, an intranet, a LAN, a PAN, or a WAN. For example, the system 100 includes a common messaging system 102 (or messaging system 102) that supports machine-to-machine instant message exchange in real-time or near real-time. In some embodiments, the messaging system 102 may be an open source machine-to-machine messaging platform, enabling IoT devices, other devices or machines, and/or systems to message or otherwise communicate with any other IoT devices, other devices or machines, and/or systems. The messaging system 102 may be implemented by one or more remote servers and may allow an IoT device, other device or machine, and/or a system to exchange communications or messages with another device or system regardless of whether the devices or systems are built by different manufacturers, operate using different connection protocols or interfaces, or whether the devices or systems are built with the ability to communicate with a network. While only a single messaging system 102 is shown, one of ordinary skill in the art will appreciate that multiple private or public messaging systems may be implemented using the techniques described herein. One or more remote servers of the messaging system 102 may be connected to a network via the internet and/or other connection platforms (e.g., a WAN and/or a LAN) such that the servers may be accessed from anywhere in the world. The remote servers allow IoT devices, other devices or machines, and/or systems connected to the servers via the network to communicate and exchange messages with other IoT devices, other devices or machines, and/or systems from anywhere in the world. The remote servers may be configured with enough processing power to run an application, store and process data, and/or perform any other computing task. In some examples, the remote servers may provide enough processing power to operate applications running on devices located remotely from the servers and applications running on the servers themselves.

Messaging system 102 may be configured to support multiple connection protocols, such as any suitable machine-to-machine connection protocol. For example, the messaging system 102 may support connection protocols such as hypertext transfer protocol (HTTP), websockets, message queuing telemetry transport (MQTT), constrained application protocol (CoAP), Extensible Messaging and Presence Protocol (XMPP), Simple Network Management Protocol (SNMP), AllJoyn, and/or any other suitable connection protocol. The multiple connection protocols supported by the messaging system 102 may be referred to herein as native connection protocols of the messaging system 102. Messaging system 102 may also support multiple developer platforms, such as one or more software developer kits (SDKs). For example, the messaging system may support SDKs such as Node.js, JavaScript, Python, Ruby, or any other suitable SDK. The support of multiple developer platforms and protocols provides programmers with the flexibility to customize functions, instructions, and commands for IoT devices, other devices or machines, and/or systems connected to messaging system 102

The messaging system 102 may include a cloud infrastructure system that provides cloud services. In certain embodiments, services provided by the cloud infrastructure of messaging system 102 may include a host of services that are made available to users of the cloud infrastructure system on demand, such as registration, access control, and message routing for users, devices or machines, systems, or components thereof. Services provided by the messaging system 102 can be dynamically scaled to meet the demands of users. The messaging system 102 may comprise one or more computers, servers, and/or systems. In some embodiments, the computers, servers, and/or systems that make up the cloud network of the messaging system 102 are different from a user’s own on-premises computers, servers, and/or systems. For example, the cloud network may host an application, and a user may, via a communication network such as a WAN, LAN, and/or PAN, on demand, order and use the application. In some embodiments, the cloud network of the messaging system 102 may host a Network Address Translation Traversal application to establish a secure connection between the messaging system 102 and a device or machine. A separate secure connection (e.g., using a native protocol of the messaging system 102) may be established by each device or machine for communicating with the messaging system 102. In certain embodiments, the cloud network of the messaging system 102 may include a suite of applications, middleware, or firmware that can be accessed by a user, device or machine, system, or component thereof.

Upon registering with the messaging system 102, each device or machine, person, and/or system may be assigned a unique identifier and a security token. For example, a device (IoT or other device) or system connected to the messaging system, a person associated with an account or an application that utilizes the messaging system, or the like may be assigned or otherwise provided with a distinct universally unique identifier (UUID) and/or a distinct security token. Each IoT device, other device or machine, system, and/or person using a device must communicate its distinct UUID and security token to the messaging system 102 in order to access the messaging system 102. The messaging system 102 may authenticate the IoT device, other device or machine, system, and/or person using each respective distinct UUID and token. The messaging system 102 may use the UUIDs to process, route, and/or otherwise manage messages and other communications to an appropriate device, person, system, and/or machine. For example, a device may send a message with its UUID and a destination UUID for the device, system, or person to which the message is destined. The messaging system 102 may process, route, and/or otherwise manage the message so that it is received at the appropriate destination
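
The following sketch illustrates the kind of message envelope described above, carrying the sender's UUID and security token together with a destination UUID. The field names, helper function, and JSON layout are illustrative assumptions, not the messaging system's actual wire format or API.

```python
# Minimal sketch of a message envelope: sender credentials plus a destination UUID.
# Field names and the helper are assumptions for illustration only.
import json
import uuid


def make_envelope(sender_uuid: str, sender_token: str,
                  destination_uuid: str, payload: dict) -> str:
    """Build a JSON message so the messaging system can authenticate the sender
    and route the payload to the destination UUID."""
    envelope = {
        "fromUuid": sender_uuid,     # UUID assigned at registration
        "token": sender_token,       # security token assigned at registration
        "toUuid": destination_uuid,  # UUID of the destination device, person, or system
        "payload": payload,          # application-specific message body
    }
    return json.dumps(envelope)


# Example: a device sends a reading to another registered device.
device_uuid = str(uuid.uuid4())          # would normally be issued by the messaging system
message = make_envelope(device_uuid, "secret-token",
                        "11111111-2222-3333-4444-555555555555",
                        {"temperature_c": 21.5})
print(message)
```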

In some embodiments, one or more components or programs of a device or system may also be assigned a unique identifier and a security token. In some cases, the unique identifier and/or token for the components of a device or system may be the same as the unique identifier and/or token of the device or system itself. In some cases, the unique identifier and/or token for a component or program of a device or system may be different from that of the device or system and may be unique only to the component or program. In some embodiments, components of a device or system that may be assigned a unique identifier may include a sensor (e.g., a camera, motion sensor, temperature sensor, accelerometer, gyroscope, or any other available sensor), an output (e.g., a microphone, siren, display, light, tactile output, or any other available output), a third-party messaging service that the device or system is able to run, or any other component of a device or system that can be identified, accessed, and/or controlled. 

Messaging system 102 may further be configured to interact with any application programming interface (API). Each API may also be assigned or otherwise provided with a unique identifier (e.g., a distinct UUID) and/or a security token. Assigning a unique identifier to each API enables messaging system 102 to receive instructions from and provide instructions to any IoT device, other device or machine, and/or system that is connected to the messaging system 102. Further details describing how the messaging system 102 can interact with any API of any device or system are described herein. By being able to interact with any API, messaging system 102 may control the functionality of all components of a registered IoT device, other device or machine, and/or system that are accessible by the messaging system 102. In some embodiments, messaging system 102 may be configured such that a single message transmitted by messaging system 102 may be communicated to multiple devices and/or systems having different APIs. Accessible IoT devices, other devices or machines, and/or systems include any device that has been registered with messaging system 102 and that has been assigned a unique identifier and/or a security token. For example, a user may purchase an IoT device. The user must register the IoT device with the messaging system 102, and the IoT device may be assigned a UUID and security token by the messaging system 102 to make the IoT device accessible to the messaging system 102

Using the common messaging system 102, people, devices, systems, and/or components thereof that have assigned UUIDs can query and communicate with a network of other people, devices, systems, and components thereof that have assigned UUIDs and that meet specific search criteria. For example, a device may query the common messaging system 102 searching for a specific type of device located in a particular area, and may receive a list of UUIDs for devices that meet the search criteria. The device may then send a message with the destination UUID assigned to the destination device to which it wants to send the message
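
A small sketch of this query-then-message flow is shown below. The query_devices and send_message callables stand in for whatever client library a particular deployment exposes; the criteria keys are assumptions.

```python
# Hedged sketch of the query-then-message flow: search for devices matching
# criteria, then message each returned UUID. The callables are placeholders.
from typing import Callable, List


def find_and_notify(query_devices: Callable[[dict], List[str]],
                    send_message: Callable[[str, dict], None],
                    criteria: dict, payload: dict) -> None:
    """Query the messaging system for matching device UUIDs and message each one."""
    matching_uuids = query_devices(criteria)          # e.g. {"type": "light", "area": "lobby"}
    for destination_uuid in matching_uuids:
        send_message(destination_uuid, payload)


# Usage with stub implementations (real calls would go to the messaging system):
find_and_notify(lambda c: ["uuid-light-1", "uuid-light-2"],
                lambda u, p: print(f"to {u}: {p}"),
                {"type": "light", "area": "lobby"},
                {"command": "on"})
```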

In some embodiments, messaging system 102 may also detect, connect, and/or communicate with other servers, allowing messaging system 102 to route messages to IoT devices, other devices or machines, and/or systems on the other servers via a server-to-server connection. Server-to-server communications may include connections used to transfer data from one server to another server. For example, a user may use multiple cloud servers to store different types of information. A user may want to transfer data from a first server of a first cloud network to a second server of a second cloud network. A server-to-server communication allows the user to directly transfer or otherwise share this information with the second server. As another example, the messaging system 102 supports inter-cloud communications to allow people, devices or machines, systems, or components thereof to route messages across clouds to other people, devices or machines, systems, or components thereof on other clouds. For instance, a device connected to a private or public cloud network may send a message to another device connected to another private or public cloud

IoT devices, other devices or machines, and/or systems may be able to connect with the messaging system 102 in several ways. In some embodiments, devices and systems may communicate with the messaging system 102 using a messaging system gateway. For example, IoT devices, other devices or machines, and/or systems may communicate with the messaging system 102 using messaging system gateway or hub 114. The messaging system gateway 114 may be connected to a same LAN as the devices that use the messaging system gateway 114. For example, the messaging system gateway 114 may be installed at a location, such as a home, office, a sports venue, an outside environment (e.g., a park, a city, or the like), or any other suitable location. In some embodiments, the messaging system gateway 114 includes an instance of messaging system software that is configured to interact with the messaging system 102. In some cases, the messaging system gateway 114 may be run on an operating system, such as, but not limited to, Linux.TM., Mac.TM. OS, and/or Windows.TM.. In some embodiments, a messaging system gateway 114 may be a standalone physical device, such as a wireless router or modem, which runs the gateway software that connects to the messaging system 102 using a WAN. In some embodiments, a messaging system gateway 114 may be integrated into an IoT device, other device or machine, and/or system by installing the gateway software onto the IoT device, other device or machine, and/or system. For example, the messaging system gateway 114 may be run on computing devices such as a Raspberry Pi, a home and/or office computer, Intel.TM. Galileo boards, BeagleBones, Yuns, and/or other suitable computing devices

Regardless of physical form, the messaging system gateway 114 may operate as an intermediary between the messaging system 102 and the devices or systems that use the messaging system gateway 114. For example, IoT devices, other devices or machines, and/or systems may be connected to messaging system gateway 114, which then links the IoT devices, other devices or machines, and/or systems to the messaging system 102 in real-time. The connection of a device or system to the messaging system 102 via the messaging system gateway 114 allows connected IoT devices, other devices or machines, and/or systems to communicate with one another in real-time. IoT devices, other devices or machines, and/or systems may be connected to messaging system gateway 114 using one or more native connection protocols of the IoT device, other device or machine, and/or system. The protocols may include, but are not limited to, Transmission Control Protocol (TCP), User Datagram Protocol (UDP), WiFi, ZigBee, Bluetooth Low Energy (BLE), HTTP, websockets, MQTT, CoAP, XMPP, SNMP, AllJoyn, and/or any other suitable connection protocol. In some embodiments, messaging system gateway 114 may broadcast a private network signal such that registered devices and systems may securely connect to the messaging system gateway 114 and to the messaging system 102. Devices and systems that do not have access to the messaging system gateway 114 and messaging system 102 may be unable to process the private network signal

In some embodiments, messaging system gateway 114 is on a LAN side of a firewall, such as a network address translation (NAT) firewall implemented using a router, or other suitable firewall. In some cases, the messaging system gateway 114 may use websockets to connect to the messaging system 102. The connection between websockets of the messaging system gateway 114 and the messaging system 102 may include a bi-directional persistent connection. The bi-directional persistent connection may auto-reconnect as WAN (e.g., internet, or the like) connectivity becomes available. By locating the messaging system gateway 114 inside of the firewall, only communications to and from the messaging system gateway 114 have to be granted access through the firewall. Accordingly, the messaging system 102 and any system and/or device connected to the messaging system gateway 114 may communicate through the firewall via the messaging system gateway 114. The messaging system gateway 114 may be used by a person or business to connect various IoT devices, other devices or machines, and/or systems to the messaging system 102, serving as a secure connection for communicating with messaging system 102 much like a personal firewall
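
A hedged sketch of the auto-reconnecting persistent connection described above follows. The open_websocket and handle_message callables are placeholders for whichever websocket client and message handler the gateway software actually uses, and the backoff policy is an assumption.

```python
# Sketch of the gateway's auto-reconnecting persistent outbound connection.
# open_websocket() and handle_message() are placeholders; the exponential
# backoff values are assumptions for illustration only.
import time


def run_gateway(open_websocket, handle_message, max_backoff: float = 60.0) -> None:
    """Keep a persistent outbound connection open, reconnecting as WAN
    connectivity comes and goes. Because the connection is initiated from
    inside the firewall, only the gateway's traffic needs to be allowed out."""
    backoff = 1.0
    while True:
        try:
            connection = open_websocket()       # outbound connection to the messaging system
            backoff = 1.0                       # reset backoff after a successful connect
            for message in connection:          # block on inbound traffic
                handle_message(message)
        except (ConnectionError, OSError):
            time.sleep(backoff)                 # wait, then try to reconnect
            backoff = min(backoff * 2, max_backoff)
```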

Devices and systems may also be able to communicate with the messaging system 102 using a mobile messaging system gateway that is installed on a mobile device. For example, IoT devices, other devices or machines, and/or systems may be able to connect with the messaging system 102 using a mobile gateway 118. The mobile gateway 118 is similar to a messaging system gateway 114, but instead is installed and operated on a mobile device. For example, mobile gateway 118 may be installed on a mobile phone, tablet, laptop, wearable device, or other suitable mobile device. The mobile gateway 118 may allow the mobile phone to connect to the messaging system 102. The mobile gateway 118 may access all sensors on the mobile device. For example, geolocation sensor data, compass headings, and/or accelerometer data of a mobile phone may be provided to the messaging system 102 through mobile gateway 118. In some embodiments, the mobile gateway 118 may be installed in wearable technology, such as pedometers, headsets, watches, and the like, as well as in Bluetooth.TM. Low Energy devices. In some embodiments, the mobile gateway 118 may also provide a personal area network (PAN) and may allow other devices that are connectable to the mobile device to connect to the messaging system 102 via the mobile gateway 118. For example, one or more devices that do not have an Internet Protocol address and that are not able to connect to a LAN (e.g., a WiFi network or the like) may connect to the mobile gateway 118 using a wired interface or a short-range communication protocol interface, such as Bluetooth, BLE, ZigBee, near field communication (NFC), radio frequency (RF), infrared (IR), or any other suitable communication protocol. These devices may then connect to messaging system 102 through the mobile gateway 118 of the mobile device. The mobile gateway 118 may operate to exchange communications between the devices and the messaging system 102. Devices that do not have an Internet Protocol address and that are not able to connect to a local area network may include wearable technology or other similar devices that only have access to a PAN.

In some embodiments, an IoT device, other device or machine, and/or system may connect with messaging system 102, the messaging system gateway 114, and/or the mobile gateway 118 using a universal messaging system interface 116 that is programmed into the device or system. The built-in universal messaging system interface 116 (or universal interface 116) allows a device or system to perform operations that native firmware of the device or system does not allow it to perform. For example, the messaging system interface 116 may override the native firmware of a device to allow the device to perform various operations that are outside of the functionality of the native firmware. In some embodiments, the messaging system interface 116 may be installed on a device that does not have the ability to communicate with other devices using one or more connection protocols. In such embodiments, the messaging system interface 116 may provide the device with the capability to use one or more connection protocols. The messaging system interface 116 may access one or more sensors, inputs, outputs, or programs on the device or system in order to perform various operations. For example, the messaging system interface 116 may have access to and control a geolocation sensor, a compass, a camera, a motion sensor, a temperature sensor, an accelerometer, a gyroscope, a graphical interface input, a keypad input, a touchscreen input, a microphone, a siren, a display, a light, a tactile output, a third-party messaging service that the device or system is able to run, or any other component of a device or system that can be identified, accessed, and/or controlled.

In some embodiments, the built-in universal messaging system interface 116 may include an operating system that allows the device to communicate with the messaging system 102. Messaging system interface 116 may be installed on an IoT device, other device or machine, and/or system, such as a computing device. For example, the messaging system interface 116 may be installed on a Raspberry Pi board, an Arduino board, a microcontroller, a minicomputer, or any other suitable computing device

In some embodiments, a device or system running the messaging system interface 116 may connect directly to messaging system 102. In some embodiments, a device or system running the messaging system interface 116 may connect to the messaging system 102 via the messaging system gateway 114 or the mobile gateway 118. The messaging system interface 116 run by the device or system may be assigned a UUID and a token. The messaging system interface 116 may connect to the messaging system 102 using the assigned UUID and token, and may await further instructions from the messaging system 102. In some embodiments, the messaging system 102 may act as a compute server that controls the messaging system interface 116. For example, messaging system 102 may activate and/or deactivate pins of the computing device running the messaging system interface 116, request sensor data from the computing device, and/or cause the messaging system interface 116 to perform other functions related to the computing device. In some embodiments, the messaging system interface 116 can be connected to a gateway (e.g., messaging system gateway 114 or mobile gateway 118), and the gateway may act as a compute server that controls the messaging system interface 116 in a similar manner as described above. In some embodiments, messaging system interface 116 may be a mobile operating system or application that is able to run on mobile device operating systems, such as iOS and Android.TM. operating systems
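
The sketch below illustrates one way an interface instance might dispatch commands received from the messaging system (or from a gateway acting as its compute server), such as setting pins or returning sensor data. The command names, field layout, and the gpio/sensor helpers are assumptions, not the interface's actual protocol.

```python
# Illustrative sketch of a messaging system interface dispatching commands
# received from its compute server. Command names and the gpio/sensor/reply
# helpers are assumptions for illustration only.
def handle_command(command: dict, gpio, sensors, send_reply) -> None:
    """Dispatch a single command received over the interface's connection."""
    action = command.get("action")
    if action == "set_pin":
        # e.g. {"action": "set_pin", "pin": 17, "value": 1}
        gpio.write(command["pin"], command["value"])
    elif action == "read_sensor":
        # e.g. {"action": "read_sensor", "name": "temperature"}
        send_reply({"name": command["name"], "value": sensors.read(command["name"])})
    else:
        send_reply({"error": f"unsupported action: {action}"})
```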

Information from messaging system 102, including information transmitted to messaging system 102 by messaging system gateway 114, mobile gateway 118, and/or messaging system interface 116, may be transmitted to one or more data storage systems. For example, information about IoT devices, other devices or machines, and/or systems registered with the messaging system 102 may be transmitted to device directory 104 for storage. The information about the IoT device, other device or machine, and/or system may be stored in device directory 104 upon registration of the IoT device, other device or machine, and/or system. For example, information related to when the IoT device, other device or machine, and/or system comes online or offline may be stored in device directory 104

In some embodiments, the device directory 104 may maintain various lists, such as whitelists and/or blacklists, that are associated with a unique identifier (e.g., a UUID) assigned to a person, an IoT device, other device or machine, system, and/or component thereof. The use of whitelists and blacklists ensures that devices, systems, and users only have access to those UUIDs of IoT devices, other devices or machines, and/or systems for which permission has been granted. In one example, the device directory 104 may maintain a whitelist for a UUID assigned to a device. The whitelist includes a list or array of UUIDs assigned to devices or systems that are allowed to access the device at various levels of access. For example, four levels of access to the device may be granted to other devices or systems, and a separate list or array may be maintained for each level of access. In this example, the whitelist for the device’s UUID may include a list or array that includes UUIDs of devices or systems that may discover the device, a list or array of UUIDs of devices or systems that may send a message to the device, a list or array of UUIDs of devices or systems that may receive a message from the device, and/or a list or array of UUIDs of devices or systems that may configure the device. Other levels of access may also be granted, such as the ability of another device or system to subscribe to the device

In another example, the device directory 104 may also maintain a blacklist for a UUID assigned to the device. The blacklist includes a list or array of UUIDs assigned to devices or systems that are denied access to the device at the various levels of access. In this example, the blacklist for the device’s UUID may include a list or array that includes UUIDs of devices or systems that cannot discover the device, a list or array of UUIDs of devices or systems that cannot send a message to the device, a list or array of UUIDs of devices or systems that cannot receive a message from the device, and/or a list or array of UUIDs of devices or systems that cannot configure the device
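
A minimal sketch of such a per-UUID whitelist/blacklist structure follows. The four level names mirror the levels described above (discover, send, receive, configure); the data layout and the check logic are illustrative assumptions about how a directory entry might be organized.

```python
# Sketch of a device directory entry with per-level whitelists and blacklists.
# Level names follow the text; the layout and check are assumptions.
ACCESS_LEVELS = ("discover", "send", "receive", "configure")

directory_entry = {
    "uuid": "device-uuid-123",
    "whitelist": {level: set() for level in ACCESS_LEVELS},
    "blacklist": {level: set() for level in ACCESS_LEVELS},
}

directory_entry["whitelist"]["send"].add("friendly-device-uuid")
directory_entry["blacklist"]["discover"].add("blocked-device-uuid")


def is_allowed(entry: dict, requester_uuid: str, level: str) -> bool:
    """A requester is allowed at a level if it is whitelisted and not blacklisted."""
    if requester_uuid in entry["blacklist"][level]:
        return False
    return requester_uuid in entry["whitelist"][level]


print(is_allowed(directory_entry, "friendly-device-uuid", "send"))     # True
print(is_allowed(directory_entry, "blocked-device-uuid", "discover"))  # False
```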

In some embodiments, the device directory 104 is queriable, such that a device, system, or user may be provided with a list and/or array of IoT devices, other devices or machines, and/or systems that fit requested search criteria. The messaging system 102 may access the device directory 104 upon receiving a query from a device, system, or user. Upon polling the device directory 104 according to the criteria specified in a query, the messaging system 102 may provide a device with a list or array of UUIDs assigned to IoT devices, other devices or machines, and/or systems that are currently online and that the device has access to according to the device’s UUID and/or security token. The use of the whitelists and/or blacklists operates as a security feature, ensuring that devices, systems, and users only have access to other devices, systems, and users to which permission has been granted.

In some embodiments, sensor data from sensors of registered IoT devices, other devices or machines, and/or systems may be transmitted to sensor data storage 106. The sensor data may be streamed from a registered IoT device, other device or machine, and/or system through messaging system 102 in real-time. Sensor data storage 106 is queriable such that a user may poll sensor data storage 106 to receive data from specified sensors during a specified time period. A user may also be able to query the sensor data storage 106 for all available data from one or more sensors. In some embodiments, information from sensor data storage 106, as well as additional information from messaging system 102, may be transmitted to an analytics database 108. In some embodiments, analytics database 108 may not be queried by a user of the system 100. In other embodiments, analytics database 108 may be queried by a user of the system 100. The information stored in analytics database 108 may be accessible via a platform network 110

In some embodiments, multiple servers or other systems may each operate an instance of software that includes the messaging system 102, thus creating multiple cloud servers and/or instances of messaging systems 102. In some embodiments, a particular instance of messaging system 102 may have its own UUID that allows the instance of messaging system 102 to connect to another instance of messaging system 102 to form a mesh network of messaging systems. Other networks and devices or machines may also be part of the mesh network, such as LANs and PANs and the devices or machines that are interconnected using the LANs and PANs. Each of the LANs and PANs can have their own unique UUID and/or token registered with the messaging system 102. The LANs and PANs are addressable using their unique UUID, and can also address other UUIDs around the world. Such a mesh network may allow messages and other payloads to be routed between devices across messaging systems 102. Accordingly, the messaging system 102 supports inter-cloud communications to allow people, devices or machines, systems, or components thereof to route messages across clouds to other people, devices or machines, systems, or components thereof on other clouds. Each of the cloud networks may run an instance of the messaging system 102. For instance, a device connected to a private or public cloud network may send a message to another device connected to another private or public cloud

As described above, each person, device or machine, system (e.g., cloud network running an instance of the messaging system, a LAN, a PAN, or the like), or components thereof that is registered with the messaging system 102 is assigned a UUID. Each person, device or machine, system, or components thereof can be referenced by the messaging system using its UUID. Each of the UUIDs can discover other UUIDs (e.g., clouds, other networks, people, or devices or machines) using one or more queries, such as using multicast Domain Name System (MDNS) or API queries. In some embodiments, a UUID can connect to multiple networks thus forming a global mesh network including different networks (e.g., multiple cloud networks, LANs, PANs, or a combination of cloud networks, LANs, and/or PANs). A cloud network running an instance of messaging system may also be assigned a UUID and can route messages across cloud networks via inter-cloud communications using a routing paradigm. For example, a cloud network can send a message across cloud networks by sending the message with a route UUID_1/UUID_2/UUID_3/UUID_4, with each UUID being assigned to a different cloud network. In some embodiments, the mesh network may route the message based on known connections

Platform network 110 may include one or more analytics engines that may process the information received from the analytics database 108. The analytics engines may aggregate the received information, detect trends, and/or perform other analytics on the information. Platform network 110 may be communicatively coupled with a number of APIs 112 that are used to create, manage, identify, and/or communicate with functions of different IoT devices, other devices or machines, and/or systems. APIs may include, for example, sales analytics APIs, social media account and other third-party messaging account APIs, stock quote APIs, weather service APIs, other data APIs, mobile application APIs, and any other suitable API. For example, a Facebook.TM. or other social media message may use a messaging API to send SMS messages. Platform network 110 may use the messaging API to deliver a payload to a device or system configured to display an SMS message. A light API may be provided by a manufacturer of “smart” light bulbs. The platform network 110 would then use this light API to provide an output to turn a light bulb connected to the platform network 110 on or off. Platform network 110 is also in communication with messaging system 102 using the APIs of messaging system 102. Platform network 110 may interact with the IoT devices, other devices or machines, and/or systems connected through the messaging system 102 using UUIDs and/or security tokens

The UUIDs and/or security tokens may be issued by the messaging system 102 and/or the platform network 110. In some embodiments, a user may register systems and/or devices with the messaging system 102. The platform network 110 may import or otherwise utilize any UUIDs and/or tokens issued by the messaging system 102 during the registration. In some embodiments, a user may register devices and/or systems with the platform network 110. The platform network 110 may issue UUIDs and security tokens to IoT devices, other devices or machines, and/or systems upon registration of the IoT device, other device or machine, and/or system. The UUIDs and security tokens are used to access the messaging system 102, as described above. In some embodiments, a user may register devices and/or systems with both the messaging system 102 and the platform network 110. Either messaging system 102 or platform network 110 may issue UUIDs and/or tokens. Registration with the non-issuing system or network creates a link or other association with the issued UUIDs and/or security tokens

Platform network 110 may operate an application or other program that provides a designer graphical interface that allows a user to create a control system or flow. The designer graphical interface may allow the user to create a control system by dragging and dropping blocks that represent various devices and/or systems of the control system, inputs and/or outputs from the various devices and/or systems, and/or functions for controlling the devices and/or systems. Any IoT device, other device or machine, and/or system that is registered with platform network 110 may be configured to receive or transmit a message to any other IoT device, other device or machine, and/or system that is registered with platform network 110 using an appropriate control system designed using the designer graphical interface. Messages may be transmitted from one device or system to control operation of another device or system. For example, the platform network 110 may run control systems continuously, such that an input from a device or system may automatically cause an event to occur in a different location and/or by a different device or system. Such functionality, along with access to the data from analytics database 108, enables the platform network 110 to monitor a performance, behavior, and/or state of any IoT device, other device or machine, and/or system within the control system and to send a resulting message or payload to any other IoT device, other device or machine, and/or system in the control system based on the monitored performance, behavior, and/or state. In another example, the platform network 110 may run a control system designed using the designer graphical interface upon receiving a command, such as from a user or from another device or system. In some embodiments, the designer graphical interface operated by the platform network 110 may access any IoT device, other device or machine, and/or system connected to messaging system 102, including IoT devices, other devices or machines, and/or systems connected using the messaging system gateway 114, messaging system interface 116, and/or mobile gateway 118. This connection enables control systems created using the designer graphical interface to control output functions of devices and/or systems registered with the messaging system 102. For example, real-time monitoring of data at a remote location, such as performance of a machine or system, or of a person’s health condition may be performed by the platform network 110

The platform network 110 may also automatically provide messages or other outputs, including commands, to any of the registered IoT devices, other devices or machines, and/or systems based on processes performed on information received from IoT devices, other devices or machines, and/or systems. For example, sensor data may be received from an IoT device and processed by analytics systems of the platform network 110. Using artificial intelligence and/or machine learning within the platform network 110, the processed sensor data may be used to provide commands to another system or device connected to platform network 110

In some embodiments, platform network 110 may be connected with messaging system 102 through a web-based design interface 120. Web-based design interface 120 may include functionality similar to that of the designer of platform network 110, but operates as a web-based application. Users may design control systems and flows on web-based design interface 120 and test the control systems prior to fully deploying a control system into platform network 110. Users may have access to all IoT devices, other devices or machines, and/or systems associated with messaging system 102 and/or platform network 110, although the processing functions available using the web-based design interface 120 are limited to those provided by a web browser. Web-based design interface 120 may act as a developer design tool that functions through the capabilities of the web browser. A user may then import the control system into platform network 110 for continuous operation of the control system

Devices or machines, systems, or components thereof that are each assigned individual UUIDs may continuously stream data (e.g., sensor data) to the messaging system 102. The streamed data may be stored in device directory 104, sensor data storage 106, and/or the analytics database 108. The streamed data from the UUIDs may be reacted upon in real-time. As described in more detail below, UUIDs or a user control system or flow created using the platform network 110 can subscribe to other UUIDs streaming the data. Based on thresholds within the data, frequency of occurrence of certain data, or the occurrence of the data itself, events can be created that trigger messages to be exchanged between devices or machines and/or systems. For example, a photo sensor with an assigned UUID that senses a change in light may stream sensor data to the messaging system 102, and a control system created using the platform network 110 may indicate that anytime a change in light occurs, a light with an assigned UUID should be turned on or off. The control system may subscribe to the UUID of the sensor so that it can detect when a change in light occurs. When the control system senses a light change, it may trigger a message to be sent to the light in order to cause the light to change states (e.g., on or off). In some examples, the sensor data and message exchanges or other transactions may be streamed into the analytics database 108 for real-time, near real-time, and/or offline data analytics
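
A minimal sketch of the photo-sensor-to-light flow described above is given below. The subscribe and send_message callables, the lux field, and the threshold value are assumptions standing in for the platform network's actual subscription API.

```python
# Sketch of a control flow that subscribes to a sensor's UUID and reacts to
# each streamed reading by messaging a light's UUID. Helpers are placeholders.
def light_control_flow(subscribe, send_message,
                       sensor_uuid: str, light_uuid: str,
                       threshold: float = 50.0) -> None:
    """Turn the light on when ambient light drops below the threshold, off otherwise."""
    def on_reading(reading: dict) -> None:
        lux = reading.get("lux", 0.0)
        desired_state = "on" if lux < threshold else "off"
        send_message(light_uuid, {"command": desired_state})

    subscribe(sensor_uuid, on_reading)   # on_reading is invoked for every streamed reading
```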

In some embodiments, UUIDs can subscribe to other UUIDs with or without tokens (provided security permissions allow it). Subscribing to a device’s UUID with a token allows a person, device, or system to “spy” on the device’s inbound and outbound communications in an eavesdropping mode. Subscribing without the device’s token may only allow the subscribing device access to the messages broadcast by the device (provided security permissions allow it).

Brief Description:

Figure 2 is a system diagram illustrating an example of a system for exchanging machine-to-machine instant messages between systems and devices or machines, according to some embodiments

Detailed Description:

Figure 2 illustrates an example of a system 200 implementing various components of Figure 1. The system 200 allows the real-time exchange of machine-to-machine instant messages between devices and/or systems. System 200 includes a messaging system 202 and a messaging system 204. The messaging systems 202 and 204 may be similar to the messaging system 102 described above with respect to Figure 1, and may perform one or more of the functions described above. Either of the messaging systems 202 and 204 may be part of a private or a public cloud network. For example, messaging system 202 may be part of a public cloud network with which any device, system, or user may be registered. Messaging system 204 may be part of a private cloud network that is restricted for use by only select devices, systems, or users. For example, the private messaging system 204 may be restricted for use by employees and affiliates of a particular company

The system 200 may further include one or more messaging system interfaces implemented by one or more machines or devices. For example, the system 200 includes messaging system interface 208, messaging system interface 210, messaging system interface 212, and messaging system interface 214. The messaging system interfaces 208, 210, 212, 214 may be similar to the messaging system interface 116 described above with respect to Figure 1, and may perform one or more of the functions described above. The messaging system interfaces 208, 210, 212, 214 may each be installed on a separate computing device and integrated with a separate machine or device. For example, the messaging system interfaces 208, 210, 212, or 214 may be installed on a computing device, such as a Raspberry Pi board, an Arduino board, a microcontroller, a minicomputer, or any other suitable computing device. The computing devices with the installed messaging system interfaces 208, 210, 212, or 214 may then be integrated with separate devices or machines. Accordingly, four machines may each be integrated with a computing device installed with one of the messaging system interfaces 208, 210, 212, and 214. Devices or machines can include any electronic device, including sensors and consumer products such as light bulbs, thermostats, home automation devices, smoke alarms, burglary alarms, an accelerator or other electronic component of a vehicle, a display device, a printer, or any other electronic device

The system 200 may further include one or more messaging system gateways, including a messaging system gateway 206 and a mobile gateway (not shown). The messaging system gateway 206 may be similar to the messaging system gateway 114 described above with respect to Figure 1, and may perform one or more of the functions described above. In some embodiments, the messaging system gateway 206 may include a mobile gateway, similar to the mobile gateway 118 described above with respect to Figure 1. The messaging system gateway 206 may be connected to a local area network (LAN) and/or to a personal area network (PAN).

Any machine that has been assigned a unique identifier (e.g., a UUID) by the messaging system 202 or messaging system 204 and that has the ability to connect to a wide area network (WAN) (e.g., an IoT device) can connect directly to the messaging system 202. In some embodiments, only the messaging system 202 issues unique identifiers to people, machines or devices, systems, or components thereof. In such embodiments, the messaging system 204 may use the unique identifiers that are issued by the messaging system 202. In some embodiments, the messaging systems 202 and 204 are independent messaging systems, and each of the messaging systems 202 and 204 may issue different unique identifiers. Machines with or without unique identifiers can connect to the messaging system gateway 206. A machine with an assigned unique identifier and the appropriate access level permission can query the system 200 from anywhere in the world for other machines that meet specific search criteria. The machine may message the other machines via the messaging system 202

The messaging systems 202 and 204 support inter-cloud communications, allowing machines to route messages across the messaging systems 202 and 204 to devices and sub-devices on other cloud networks. For example, the machine running the messaging system interface 214 is connected to the private messaging system 204 cloud network, and can send a message to a machine running the messaging system interface 208 that is connected to the public messaging system 202. The machine running the messaging system interface 214 may be located in New York, N.Y., and the machine running the messaging system interface 208 may be located in London, England. The machine running the messaging system interface 214 may send the message to a route of UUIDs corresponding to the path that the message must follow in order to reach the machine running the messaging system interface 208. The route may be included in a routing list that is included in the message (e.g., in a field of the message, such as a header field). For example, the routing list for the message may include a route of UUIDs that includes UUID_MSGSYS204/UUID_MSGSYS202/UUID_MSGSYSINT208. The messaging system 202 may assign the UUID_MSGSYS204 to the messaging system 204, the UUID_MSGSYS202 to itself, and the UUID_MSGSYSINT208 to the machine running the messaging system interface 208. The network servers of the messaging systems 202 and 204, the messaging system gateway 206, and the machines or devices running the messaging system interfaces 208, 210, 212, 214, if included in the route, may each remove their UUID from the routing list and pass the message on to the next UUID in the list until the message arrives at its destination. The same routing technique may be used to send messages within the same messaging system cloud network or across multiple messaging system cloud networks
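
A hedged sketch of the hop-by-hop routing described above follows: each hop strips its own UUID from the routing list and forwards the message to the next UUID. The message structure and the forward/deliver callables are illustrative assumptions, not the messaging system's actual message format.

```python
# Sketch of hop-by-hop routing over a routing list such as
# UUID_MSGSYS204/UUID_MSGSYS202/UUID_MSGSYSINT208. Helpers are placeholders.
def forward_hop(message: dict, my_uuid: str, forward, deliver) -> None:
    """Process a message at one hop of its route."""
    route = message["route"]              # e.g. ["UUID_MSGSYS204", "UUID_MSGSYS202", "UUID_MSGSYSINT208"]
    if route and route[0] == my_uuid:
        route = route[1:]                 # strip this hop's UUID from the routing list
    if not route:
        deliver(message)                  # this hop is the final destination
    else:
        forward(route[0], {**message, "route": route})   # pass to the next UUID in the list
```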

In some embodiments, devices or machines can also communicate with other devices or machines via one or more peer-to-peer sockets rather than going through a messaging system 202 or 204. For example, the machine running the messaging system interface 208 may directly communicate with the machine running the messaging system interface 210. One or more dynamic routing protocols may be used by the machines when exchanging communications via a peer-to-peer connection. In some embodiments, devices or machines may discover and be introduced to other devices or machines using the messaging system 202. After being introduced by the messaging system 202, the devices or machines may then begin a peer-to-peer communications session directly provided they have the proper security permissions. For example, the machine running the messaging system interface 208 may query the messaging system 202 for machines that meet certain criteria (e.g., Philips Hue.TM. light bulbs in a particular location, or other suitable search criteria). The messaging system 202 may check the security permissions of the machine running the messaging system interface 208, and may return a list or array of UUIDs assigned to machines that meet the criteria and for which the machine running the messaging system interface 208 has permission to access. One of the machines on the list or array may include the machine running the messaging system interface 210. The machines running messaging system interfaces 208 and 210 may then begin a peer-to-peer communications session to directly exchange messages

In some embodiments, the messaging system 202 may store various properties of each registered person, machine or device, system, or component thereof that has an assigned UUID. Each registered person, machine or device, system, or component thereof may have a registry store in which the properties may be stored. For example, the registry store for each registered person, machine or device, system, or component thereof may be stored in a device directory similar to the device directory 104 described above. The properties can be anything that describes the person, machine or device, system, or component thereof, including status or state (e.g., on, off, idle, sleeping, or the like), type, color, features, connection protocols, geolocation, or the like. For example, one or more servers of the messaging system 202 may track how each registered machine or device is connected to the messaging system 202 or to a messaging system gateway (e.g., gateway 206). The messaging system 202 may also track the geolocation of each device or machine. For example, the messaging system 202 may store in a registry store for each machine or device the connection protocol used by each machine or device and the geolocation of each machine or device at a given point in time. The geolocation may be stored as a set of coordinates (e.g., global positioning system (GPS) coordinates, latitude-longitude coordinates, or the like). The connection protocol and the geolocation may be updated each time either changes. For example, if a machine or device changes locations or connects with the messaging system using a different connection protocol, the messaging system 202 may update the machine’s registry store with the updated connection protocol and/or geolocation. In some embodiments, the messaging system 202 can store all of the connection protocols for which a machine or device is configured to operate. The properties may be updated in real-time as the change occurs, or in partial real-time at different points in time (e.g., every 1 minute, 2 minutes, 5 minutes, 30 minutes, 1 hour, or other appropriate period of time).
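
The sketch below illustrates a registry store entry and the update-on-change behavior described above. The field names, coordinate format, and the decision to rewrite only when a value changes are illustrative assumptions.

```python
# Sketch of a per-device registry store entry updated when the connection
# protocol or geolocation changes. Field names are assumptions.
import time


def update_registry(registry: dict, device_uuid: str,
                    protocol: str, geolocation: tuple) -> None:
    """Record the device's current connection protocol and geolocation,
    writing only when either value has changed."""
    entry = registry.setdefault(device_uuid, {})
    if entry.get("protocol") != protocol or entry.get("geolocation") != geolocation:
        entry.update({
            "protocol": protocol,           # e.g. "mqtt", "websockets", "coap"
            "geolocation": geolocation,     # e.g. (latitude, longitude)
            "updated_at": time.time(),
        })


registry = {}
update_registry(registry, "device-uuid-123", "mqtt", (40.7128, -74.0060))
update_registry(registry, "device-uuid-123", "mqtt", (40.7128, -74.0060))  # no change, no rewrite
print(registry["device-uuid-123"])
```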

The messaging systems 202 and 204 operate using one or more native connection protocols. For example, the messaging systems 202 and 204 may natively recognize an HTTP connection protocol, a websockets connection protocol, a MQTT connection protocol, a CoAP connection protocol, an XMPP connection protocol, an SNMP connection protocol, an AllJoyn connection protocol, or any other appropriate connection protocol. One of ordinary skill in the art will recognize that the messaging systems 202 and 204 may natively operate using any other appropriate machine-to-machine connection protocol. Other protocols may be added to the messaging system 202 or 204 over time as the protocols become more universally used. 

The messaging system 202 may also include a universal application programming interface that is available for use by all of the native connection protocols of the messaging system 202. The universal application programming interface may be used to interface Internet of Things (IoT) devices that use different proprietary application programming interfaces. The universal application programming interface allows the messaging system 202 to avoid having to go through each machine’s proprietary cloud network and proprietary application programming interface to facilitate message exchange between machines that use different proprietary protocols to communicate. Without a universal application programming interface, a server may receive a message from a first device that is destined for a second device. The first device may use a first proprietary connection protocol and application programming interface and the second device may use a second proprietary connection protocol and application programming interface. The server would have to send the message to the proprietary cloud server with which the second device is registered. The proprietary cloud server would then access the application programming interface used by the second device, and send the message to the second device. Such a procedure of sending the messages to a different proprietary cloud network using different application programming interface requests for each different proprietary protocol used adds latency to the message transport from the first device to the second device. Using the universal application programming interface, the messaging system 202 can receive messages from the first device, and can directly send the messages to the second device (or to a local gateway to which the second device is connected via a LAN or PAN) using a single application programming interface request

The universal application programming interface supports various commands. For example, the universal application programming interface allows users, machines or devices, systems, or components thereof to get a status of the messaging system 202 (e.g., online, offline, temporarily offline, or the like). The universal application programming interface also allows a machine or device to be registered with the messaging system 202. Upon receiving a registration request, the universal application programming interface may return a UUID and a security token to the registrant. The universal application programming interface also specifies how queries from users, machines or devices, systems, or components thereof are handled. For example, the universal application programming interface may allow the messaging system 202 to return a list of UUIDs that correspond to a query for different users, machines or devices, systems, or components thereof. As another example, the universal application programming interface may allow the messaging system 202 to return information related to a specific machine or device in response to a query for information relating to the machine or device. The universal application programming interface also describes how to update features of (e.g., change a location, connection protocol, color, or other feature) or control (e.g., turn on/off, move to a different location, or the like) registered machines or devices in response to requests from users, machines or devices, systems, or components thereof to make the changes (and that have permission to do so). One of ordinary skill in the art will appreciate that the universal application programming interface can specify to the servers of the messaging system 202 how to perform various generic functions that relate to any connected users, machines or devices, systems, or components thereof.

One or more computing devices of the messaging system 202 can route messages to and from any connected machine or device in any supported protocol (whether native or transformed by a plug-in, as described below). The computing devices may include one or more network servers. The messaging system 202 may translate between the different native connection protocols to facilitate message exchanges between machines or devices that operate using different connection protocols. For example, the common messaging system may translate a received communication that is in a first native connection protocol to a second native connection protocol before sending the communication to a machine or device that only operates using the second native connection protocol or that operates using a connection protocol that is different than the first and second native connection protocols (in which case a plug-in would be needed to convert from the second native connection protocol to the protocol that the machine uses). In one example, a MQTT device can use the messaging system 202 to communicate a message to a CoAP device, a websocket-powered device, or a web page via HTTP. The messaging system 202 can thus interpret or translate the message to the destination device’s connected or preferred connection protocol

In one example, a computing device may be used for interfacing Internet of Things (IoT) devices that use different connection protocols. For example, the computing device may be a network server of the messaging system 202, and may include one or more data processors. The computing device may also include a receiver. A first IoT device may transmit a communication destined for a second IoT device across a WAN or to a messaging system gateway (e.g., messaging system gateway 206) via a LAN and/or PAN. The first IoT device may include a messaging system interface. The receiver of the computing device may receive the communication from the first IoT device. The first IoT device may be communicatively connected to the computing device using a first connection protocol and the communication may be received using the first connection protocol. The first connection protocol may be a connection protocol that is native to the computing device of the messaging system 202. For example, the first connection protocol may be a MQTT connection protocol

The computing device may include a non-transitory computer-readable storage medium containing instructions that, when executed on the one or more data processors, cause the one or more processors to perform various operations. The computing device may determine a second IoT device to which the communication is intended to be transmitted. The computing device may also determine a second connection protocol used by the second IoT device. For example, the first IoT device may be assigned a first UUID, and the second IoT device may be assigned a second UUID. The received communication may include the second UUID (e.g., in a field of a communication packet). The computing device may determine the identity of the second IoT device and the second connection protocol used by the second IoT device based on the second UUID. For example, the computing device may refer to a registry store (e.g., in the device directory of messaging system 202) that is associated with the second UUID in order to determine the connection protocol used by the second IoT device. The computing device may then translate the communication to the second connection protocol that corresponds to the protocol with which the second IoT device is connected to the computing device of the messaging system 202. The first connection protocol is different than the second connection protocol. For example, the second connection protocol may be an HTTP connection protocol. In some embodiments, the first IoT device may not be configured to communicate using the second connection protocol, and the second IoT device may not be configured to communicate using the first connection protocol
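
A simplified sketch of this routing step, assuming a hypothetical in-memory registry keyed by UUID and a placeholder translate function (the actual protocol translation is implementation-specific), may look as follows:

```python
# Hypothetical registry keyed by UUID; a real deployment would consult the
# device directory of the messaging system.
registry = {
    "uuid-first-device":  {"protocol": "mqtt"},
    "uuid-second-device": {"protocol": "http"},
}

def translate(payload, source_protocol, target_protocol):
    # Placeholder: real translation depends on the two protocols involved.
    return {"from": source_protocol, "to": target_protocol, "body": payload}

def route(communication):
    # The destination UUID is carried in a field of the communication packet.
    dest = registry[communication["destination_uuid"]]
    if communication["protocol"] != dest["protocol"]:
        communication["payload"] = translate(
            communication["payload"], communication["protocol"], dest["protocol"])
        communication["protocol"] = dest["protocol"]
    return communication  # handed to the transmitter using the destination protocol

msg = {"destination_uuid": "uuid-second-device", "protocol": "mqtt", "payload": "turn on"}
print(route(msg)["protocol"])  # -> http
```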

The computing device may further include a transmitter for transmitting the communication to the second IoT device that is communicatively connected to the computing device using the second connection protocol. The communication is transmitted using the second connection protocol

In some embodiments, the receiver is configured to and may receive a response to the communication from the second IoT device. The response may be received using the second connection protocol with which the second IoT device is communicatively connected to the computing device. The computing device may then translate the response to the first connection protocol with which the first IoT device is communicatively connected to the computing device. The transmitter is configured to and may transmit the response to the first IoT device using the first connection protocol

In some embodiments, the receiver is configured to and may receive a second communication from a third-party messaging account. The transmitter is configured to and may transmit the second communication to the second IoT device. The second communication received from the third-party messaging account controls a function of the second IoT device. For example, the third-party messaging account may be an account of a third-party messaging service, such as Facebook.TM., Twitter.TM., LinkedIn.TM., SMS, or any other messaging service that allows a user of a device to send and receive messages using a registered account. In some embodiments, the second communication includes a message and a tag. The tag identifies a destination program of the second device, such as an application or program that enables a machine or device to send messages using the third-party messaging accounts. For example, the tag may identify an identifier of the application or program. Upon being received by the application or program of the second device, the destination application or program may be opened and the tagged data may be entered into the application or program to activate the indicated function
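
A minimal sketch of dispatching such a tagged message, with hypothetical handler names standing in for the destination applications or programs, might look like:

```python
# Hypothetical handlers standing in for the destination applications.
handlers = {
    "lighting_app": lambda data: print("lighting_app activated with:", data),
    "alarm_app":    lambda data: print("alarm_app activated with:", data),
}

def deliver(second_communication):
    tag = second_communication["tag"]       # identifies the destination program
    data = second_communication["message"]  # tagged data entered into that program
    handlers[tag](data)                     # open the program and activate the function

deliver({"tag": "lighting_app", "message": "turn on"})
```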

In some embodiments, the messaging system gateway 206 may include one or more messaging system plug-ins. In some embodiments, one or more plug-ins may be installed on one or more computing devices, such as a microcontroller, a minicomputer, or any other suitable computing device in the messaging system gateway 206. In some embodiments, one or more plug-ins may be added to one or more existing programs of the messaging system gateway 206. In some examples, each plug-in may include program code that knows how to interact with the messaging system gateway 206. For example, a plug-in may include a piece of JavaScript code. In some examples, when sending messages from a machine to the messaging system 202, a messaging system plug-in may translate or convert one or more connection protocols that are used by the machine and that are not native to the messaging system 202 to a native connection protocol of the messaging system 202. When sending messages from the messaging system 202 to the machine, the messaging system plug-in of the gateway 206 may also translate the native connection protocols of the messaging system 202 to the protocols used by the machine. In some examples, a messaging system plug-in may also translate or map one or more proprietary application programming interfaces used by a machine to a universal application programming interface of the messaging system 202. Similar plug-ins may be used in a mobile gateway (e.g., mobile gateway 118), and may perform similar functions as those described herein. For example, a mobile gateway may allow a user to interconnect various devices worn or carried by the user via a PAN provided by the mobile gateway, as described above. One or more plug-ins of the mobile gateway may allow the devices to communicate with the messaging system 202, similar to the plug-ins of the messaging system gateway 206

To perform the translation, a plug-in may define a message schema that corresponds to the format of the messages required to communicate with a particular machine or device. For example, a message with a command from the messaging system 202 may instruct one or more machines to perform a function, such as to turn off all lights in a room. The message may be transmitted in a general format of the universal application programming interface that is not specific to the proprietary application programming interfaces of the different machines. The message may also be transmitted by the messaging system 202 using a connection protocol that is not used by the different machines. The proprietary application programming interfaces of the machines may only be configured to receive messages in a certain format, and the message from the messaging system 202 may not be in any of the specific formats. The one or more plug-ins that are used to translate messages for the different machines may translate the message into the format that is required by each of the proprietary application programming interfaces. The plug-ins may also transmit the message to the machines using the proprietary connection protocol with which the machines are configured to operate. 
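
An illustrative sketch of such plug-in message schemas follows; the plug-in names and the proprietary message bodies shown are invented for illustration and are not actual vendor formats:

```python
# A generic command in the universal format (field names are illustrative).
GENERIC_COMMAND = {"action": "lights_off", "scope": "room_12"}

class PluginA:
    # Message schema mapping generic actions to an invented proprietary JSON body.
    schema = {"lights_off": {"on": False}}
    def translate(self, generic):
        return {"endpoint": "/lights/state", "body": self.schema[generic["action"]]}

class PluginB:
    # A second plug-in targeting a different (also invented) proprietary format.
    schema = {"lights_off": "<BinaryState>0</BinaryState>"}
    def translate(self, generic):
        return {"soap_action": "SetState", "body": self.schema[generic["action"]]}

for plugin in (PluginA(), PluginB()):
    print(plugin.translate(GENERIC_COMMAND))
```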

Accordingly, the messaging system gateway 206 supports an open plug-in architecture that translates non-native connection protocols, such as Philips Hue.TM., Nest.TM., Belkin Wemo.TM., Insteon.TM., SmartThings.TM., or any other appropriate proprietary, legacy, or new connection protocols, to native protocols and/or to a universal application programming interface of the messaging system 202. In some cases, one or more of the machines or devices themselves may include a messaging system plug-in. Each machine or device that runs proprietary firmware and/or that uses proprietary application programming interfaces can include one or more plug-ins that translate the proprietary communications to and from the connection protocols used by the messaging system 202. The plug-in architecture allows proprietary, legacy (e.g., RS-232 serial, RF, or the like), and/or new machines or devices (e.g., BLE wearable devices or the like) to be registered with and communicate with the messaging system 202 regardless of the connection protocol natively used by the machines or devices

The messaging system gateway 206 may include multiple plug-ins. For example, a set of machines or devices may be connected to the messaging system gateway 206. Different plug-ins may be used by different subsets of machines or devices that are connected to the messaging system gateway 206. The different subsets of machines may relate to different classes of devices. For example, machines may be broken into classes based on a manufacturer of the devices, a connection protocol and/or application programming interface used by the devices, or any other appropriate classification. Each of the devices that are connected to the messaging system gateway 206 may be assigned to a logical sub-device that the messaging system gateway 206 keeps track of. The messaging system gateway 206 may assign and map each logical sub-device to a particular plug-in. For example, the messaging system gateway 206 may store a record of all devices, with the record of each connected device including a separate sub-device and plug-in combination. In one example, three Philips Hue.TM. lights and two Nest.TM. smoke alarms may be connected to the messaging system gateway 206 for communicating with the messaging system 202. The messaging system gateway 206 may have a stored record for each device, including five records. The three records for the three Philips Hue.TM. lights may each include a separate sub-device (e.g., sub-device_A, sub-device_B, sub-device_C) and a plug-in that is specifically configured to translate between the messaging system 202 native connection protocols and application programming interfaces and the Philips Hue.TM. connection protocols and application programming interfaces. Similar records may be stored for the two Nest.TM. smoke alarms, including two records storing a separate sub-device for each smoke alarm (e.g., sub-device_D, sub-device_E) and a plug-in that is configured to translate between the messaging system 202 native connection protocols and application programming interfaces and the Nest.TM. connection protocols and application programming interfaces. In some embodiments, the messaging system gateway 206 may include a single plug-in that is configured to and may translate between multiple proprietary connection protocols and application programming interfaces
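
A sketch of such a record store, using the five-record example above (the structure and field names are hypothetical), might be:

```python
# Hypothetical record structure for the five-device example above.
records = [
    {"device": "hue_light_1",   "sub_device": "sub-device_A", "plugin": "hue_plugin"},
    {"device": "hue_light_2",   "sub_device": "sub-device_B", "plugin": "hue_plugin"},
    {"device": "hue_light_3",   "sub_device": "sub-device_C", "plugin": "hue_plugin"},
    {"device": "smoke_alarm_1", "sub_device": "sub-device_D", "plugin": "nest_plugin"},
    {"device": "smoke_alarm_2", "sub_device": "sub-device_E", "plugin": "nest_plugin"},
]

def plugin_for(sub_device):
    # The gateway looks up which plug-in applies to a given logical sub-device.
    return next(r["plugin"] for r in records if r["sub_device"] == sub_device)

print(plugin_for("sub-device_D"))  # -> nest_plugin
```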

In one example of the universal application programming interface, a computing device may be provided with the universal application programming interface. The computing device may be included in a cloud network. For example, the computing device may include one or more network servers of the messaging system 202. The computing device includes one or more data processors and a receiver for receiving a communication originating from a first IoT device. The first IoT device may use a first proprietary application programming interface and the communication may include a command for a second IoT device to perform. The command may be converted from a protocol corresponding to the first proprietary application programming interface to a universal protocol corresponding to the universal application programming interface. For example, either the computing device of the messaging system 202 or a plug-in of a messaging system gateway (e.g., messaging system gateway 206) may convert the command to a format that is understood by the universal application programming interface, as described above. 

The computing device may include a non-transitory computer-readable storage medium containing instructions that, when executed on the one or more data processors, cause the one or more processors to perform various operations. The computing device may determine that the communication is to be transmitted to the second IoT device. For example, the first IoT device may be assigned a first UUID, and the second IoT device may be assigned a second UUID. The received communication may include the second UUID (e.g., in a field of a communication packet). The computing device may determine the identity of the second IoT device based on the second UUID. The computing device may then cause a transmitter to transmit the communication including the command to the second IoT device. The second IoT device may use a second proprietary application programming interface. The command may be converted from the universal protocol corresponding to the universal application programming interface to a protocol corresponding to the second proprietary application programming interface. For example, either the computing device of the messaging system 202 or a plug-in of a messaging system gateway (e.g., messaging system gateway 206) may convert the command to a format that is usable by the second proprietary application programming interface, as described above. 

In some embodiments, the receiver is configured to and may receive a second communication from a third-party messaging account. The transmitter is configured to and may transmit the second communication to the second IoT device. The second communication received from the third-party messaging account controls a function of the second IoT device. For example, the third-party messaging account may be an account of a third-party messaging service, such as Facebook.TM., Twitter.TM., LinkedIn.TM., SMS, or any other messaging service that allows a user of a device to send and receive messages using a registered account. In some embodiments, the second communication includes a message and a tag. The tag identifies a destination program of the second device, such as an application or program that enables a machine or device to send messages using the third-party messaging accounts. For example, the tag may identify an identifier of the application or program. Upon being received by the application or program of the second device, the destination application or program may be opened and the tagged data may be entered into the application or program to activate the indicated function

In some examples of using one or more plug-ins, a computing device may be provided for communicating with the universal application programming interface of the messaging system 202. In some embodiments, the computing device may be included in the messaging system gateway 206, and may execute the one or more plug-ins. In some embodiments, the computing device may be included in a mobile gateway (e.g., mobile gateway 118), and may execute the one or more plug-ins. The computing device includes one or more data processors and a non-transitory computer-readable storage medium containing instructions which, when executed on the one or more data processors, cause the one or more processors to perform various operations. The computing device may receive a communication from a first proprietary application programming interface of a first IoT device. The communication may include a command for a second IoT device to perform. For example, the first IoT device may include a smoke alarm and the second IoT device may include a lighting system controller connected to one or more lights. The first and second IoT devices may be connected to the computing device via a local area network, a personal area network, or a combination of a local area network and a personal area network. The command may include a message indicating that the smoke alarm has detected smoke, and that the lighting system controller should turn on the one or more lights. The second IoT device may use the first proprietary application programming interface or a second proprietary application programming interface that is different than the first proprietary application programming interface. The computing device may convert the command from a protocol corresponding to the first proprietary application programming interface to a universal protocol corresponding to the universal application programming interface, using the techniques described above. The computing device may further cause a transmitter to transmit the communication including the converted command to the universal application programming interface of the messaging system 202 cloud network. The universal application programming interface may process the command, determine the identity of the second IoT device, and route the command to the appropriate networks or devices so that it can reach the second IoT device. The command may then be received by the second IoT device or a messaging system gateway connected to the second IoT device. A plug-in of the messaging system gateway may then convert the command to a command that can be carried out by a proprietary application programming interface of the second IoT device
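
A compact sketch of the gateway-side flow in this example, with hypothetical payloads and function names, might be:

```python
def from_proprietary(raw):
    # Invented vendor payload: detect a smoke event from the raw message.
    return {"event": "smoke_detected"} if raw == "ALARM:SMOKE" else None

def to_universal(event, source_uuid, dest_uuid):
    # Convert the event to the universal format expected by the cloud network.
    command = "lights_on" if event and event["event"] == "smoke_detected" else None
    return {"source": source_uuid, "destination": dest_uuid, "command": command}

def forward_to_messaging_system(universal_msg):
    # Stand-in for transmitting to the universal application programming interface.
    print("forwarding:", universal_msg)

raw = "ALARM:SMOKE"
forward_to_messaging_system(
    to_universal(from_proprietary(raw), "uuid-smoke-alarm", "uuid-light-controller"))
```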

The computing device may further receive a second communication from the universal application programming interface that includes a command for a third IoT device to perform. The third device may include another device connected to the computing device via the local area network, the personal area network, or a combination of the local area network and the personal area network. The third IoT device uses a third proprietary application programming interface. The computing device may convert the command of the second communication from the universal protocol corresponding to the universal application programming interface to a protocol corresponding to the third proprietary application programming interface used by the third IoT device. The computing device may then cause the transmitter to transmit the second communication including the converted command to the third IoT device

In some embodiments, the computing device may determine a sub-device that is mapped to the third IoT device. The computing device may route the second communication to the sub-device mapped to the third IoT device, and may determine a plug-in that applies to the sub-device. As described above, the sub-device is mapped to a particular plug-in and can be used to identify the plug-in to use to convert the command to the proprietary application programming interface

In some embodiments, the computing device may receive a third communication from the universal application programming interface. The third communication may include a message from a third-party messaging account. The third communication from the third-party messaging account controls a function of the second IoT device. For example, the third-party messaging account may be an account of a third-party messaging service, such as Facebook.TM., Twitter.TM., LinkedIn.TM., SMS, or any other messaging service that allows a user of a device to send and receive messages using a registered account. In some embodiments, the third communication includes a message and a tag. The tag identifies a destination program of the second device, such as an application or program that enables a machine or device to send messages using the third-party messaging accounts. For example, the tag may identify an identifier of the application or program. Upon being received by the application or program of the second device, the destination application or program may be opened and the tagged data may be entered into the application or program to activate the indicated function

Working in combination, the messaging system 202 and the messaging system gateway 206 (and/or a mobile gateway) with the plug-ins allow machines or devices to communicate with one another regardless of the proprietary nature of the connection protocols or application programming interfaces that are used by the machines or devices. In the example above including the three Philips Hue.TM. lights and two Nest.TM. smoke alarms, a smoke alarm may communicate with one or more of the lights by sending messages to the messaging system 202 via the messaging system gateway 206. For example, when smoke is detected by the smoke alarm, the smoke alarm may transmit a message to the messaging system gateway 206 instructing all of the lights to turn on. A plug-in of the messaging system gateway 206 may translate the message from the proprietary Nest.TM. format to a generic, native format used by the messaging system 202. The messaging system 202 may determine a destination for the message by referring to one or more UUIDs that are included in the message. In some embodiments, the messaging system 202 may determine a destination based on a query included in the message. For example, the message may indicate that the message is to be sent to all lights that are located within a particular geolocation (e.g., within a certain distance from the smoke alarm). Once the messaging system 202 determines that the destination for the message includes the three lights, the messaging system 202 may process the message using the universal application programming interface. For example, the messaging system 202 may authenticate the smoke alarm using its UUID and token combination, and may determine the security permissions of the smoke alarm in order to verify that the smoke alarm has appropriate access to the lights (e.g., that the smoke alarm is permitted to discover and send messages to the lights). 
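
A simplified sketch of such query-based destination resolution, with a hypothetical device registry and token check standing in for the actual authentication and permission model, could be:

```python
import math

# Hypothetical registry of devices with positions, plus issued tokens.
devices = {
    "uuid-light-1": {"type": "light",       "pos": (0.0, 1.0)},
    "uuid-light-2": {"type": "light",       "pos": (2.0, 0.0)},
    "uuid-light-3": {"type": "light",       "pos": (9.0, 9.0)},
    "uuid-alarm-1": {"type": "smoke_alarm", "pos": (0.0, 0.0)},
}
tokens = {"uuid-alarm-1": "secret-token"}

def resolve_destinations(query, sender_uuid, token):
    # Authenticate the sender with its UUID and token before resolving the query.
    if tokens.get(sender_uuid) != token:
        raise PermissionError("authentication failed")
    origin = devices[sender_uuid]["pos"]
    return [u for u, d in devices.items()
            if d["type"] == query["type"]
            and math.dist(origin, d["pos"]) <= query["max_distance"]]

print(resolve_destinations({"type": "light", "max_distance": 3.0},
                           "uuid-alarm-1", "secret-token"))
```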

As previously described, the system 200 also includes messaging system interface 208, messaging system interface 210, messaging system interface 212, and messaging system interface 214. The machines running the messaging system interfaces 208, 210, 212, 214 may directly connect with the messaging system 202 or may connect with the messaging system gateway 206 using the universal messaging system interfaces 208, 210, 212, 214. In some embodiments, the machine running the messaging system interface 208 may be a closed-network machine that is designed to communicate with a proprietary network in order to transmit communications to and receive communications from other machines that operate using the same proprietary protocols. The messaging system interface 208 allows the machine to communicate directly with the messaging system 202 without going through the proprietary network. By communicating directly with the messaging system 202, the machine can communicate with any machine registered with the messaging system 202 regardless of the proprietary nature of the other machines. In some embodiments, the messaging system interfaces 208, 210, 212, or 214 may be an operating system that allows the machine running the messaging system interface 208, 210, 212, or 214 to communicate with the messaging system 202

The built-in universal messaging system interfaces 208, 210, 212, 214 allow the machines running the universal messaging system interfaces 208, 210, 212, 214 to perform operations that the native firmware of the machines does not allow them to perform. For example, the messaging system interface 210 may override the native firmware of its machine to allow the machine to perform various operations that are outside of the functionality of the native firmware. In some embodiments, the messaging system interface 210 may be installed on a machine that does not have the ability to communicate with other machines using one or more connection protocols. In such embodiments, the messaging system interface 210 may provide the machine with the capability to use one or more connection protocols. The messaging system interfaces 208, 210, 212, 214 may access one or more sensors, inputs, outputs, or programs on the machines running them in order to perform various operations. For example, the messaging system interface 212 may have access to and control a geolocation sensor, a compass, a camera, a motion sensor, a temperature sensor, an accelerometer, a gyroscope, a graphical interface input, a keypad input, a touchscreen input, a microphone, a siren, a display, a light, a tactile output, a third-party messaging service that the machine is able to run, or any other component of the machine that can be identified, accessed, and/or controlled. 

The messaging system interfaces 208, 210, 212, 214 may each be assigned a different UUID and token. The messaging system interfaces 208, 210, 212, 214 may connect to the messaging system 202 using the assigned UUID and token, and may await further instructions from the messaging system 202. In some embodiments, the messaging system 202 may act as a compute server that controls the messaging system interfaces 208, 210, 212, 214. For example, messaging system 202 may activate and/or deactivate pins of the machine running the messaging system interface 214, request sensor data from the machine, and/or cause the messaging system interface 214 to perform other functions related to the machine. In some embodiments, one or more of the messaging system interfaces 208, 210, 212, 214 can be connected to a gateway (e.g., messaging system gateway 206 or a mobile gateway), and the gateway may act as a compute server that controls the messaging system interfaces 208, 210, 212, 214 in a similar manner as the messaging system 202. In some embodiments, the messaging system interfaces 208, 210, 212, 214 may each be a mobile operating system or application that is able to run on mobile device operating systems, such as iOS and Android.TM. operating systems
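
The compute-server relationship described above might be sketched as follows, with a hypothetical instruction format (set_pin, read_sensor) that is not part of the described interface:

```python
class MessagingSystemInterface:
    # Hypothetical interface: connects with an assigned UUID and token, then
    # awaits instructions from the messaging system acting as a compute server.
    def __init__(self, device_uuid, token, sensors):
        self.device_uuid, self.token, self.sensors = device_uuid, token, sensors
        self.pins = {}

    def handle(self, instruction):
        if instruction["op"] == "set_pin":
            self.pins[instruction["pin"]] = instruction["value"]
            return {"ok": True}
        if instruction["op"] == "read_sensor":
            return {"value": self.sensors[instruction["sensor"]]()}
        return {"error": "unsupported operation"}

iface = MessagingSystemInterface("uuid-214", "token-214",
                                 sensors={"temperature": lambda: 21.5})
print(iface.handle({"op": "set_pin", "pin": 7, "value": 1}))
print(iface.handle({"op": "read_sensor", "sensor": "temperature"}))
```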

In one example of using one or more messaging system interfaces, a computing device may be provided. The computing device may include a Raspberry Pi board, an Arduino board, a microcontroller, a minicomputer, or any other suitable computing device. The computing device may be built into or integrated with a first device to allow the first device to communicate with other devices. For example, the computing device includes a messaging system interface (or “universal interface”) for enabling one or more sensors of the first device to communicate with one or more sensors of a second device by connecting the one or more sensors with a network server of the messaging system 202. The first device and the second device may be in different locations, such as different rooms of a building, cities, countries, or continents. The first device may include a solar panel located on a roof of a building, and the second device may include a dimmable light bulb located in a room of the building

The proprietary protocol and native firmware of the first device and the proprietary protocol and native firmware of the second device may not allow the devices to exchange communications with each other or with the messaging system 202. The universal interface allows the first device to communicate with the messaging system 202 in order to exchange communications with the second device. The universal interface is configured to and may obtain sensor data from a sensor of the first device. For example, the universal interface may obtain sensor data from the solar panel indicating that it is getting dark outside of the building. The amount of sunlight being received may fall below a certain threshold level as measured by an amount of current being generated by the solar panel using the received sunlight. The universal interface may cause a transmitter of the first device to transmit the sensor data to a network server of the messaging system 202, which may include a cloud network. The universal interface thus allows the first device to transmit sensor data to the messaging system 202 even when the proprietary protocol or firmware of the first device does not allow the one or more sensors of the first device to communicate with other devices. In some embodiments, the universal interface may transmit the sensor data to a messaging system gateway on a LAN and/or a PAN with which the universal interface can communicate. 

The universal interface may further receive a command from the messaging system 202. The command may be received when a sensor of the second device senses a condition. For example, the light may include a photodiode that can sense light. The photodiode may sense natural light, and in response may transmit a message to the messaging system 202 (e.g., using a universal interface installed on the dimmable light) to query whether the solar panel senses sunlight. In this example, the solar panel may have incorrectly determined that it was getting dark in response to the sun going behind a cloud. The command received by the universal interface may cause the sensor of the first device to perform a function. For example, the solar panel may check the amount of current being produced based on the current amount of sunlight being received. The first device may then send a message to the messaging system 202 with the updated sensor data
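
A minimal sketch of the solar-panel exchange, using an assumed current threshold and hypothetical callback names, might be:

```python
DARK_THRESHOLD_AMPS = 0.2  # assumed threshold; not specified in the text

def report_if_dark(current_amps, send):
    # Transmit sensor data when the measured current falls below the threshold.
    if current_amps < DARK_THRESHOLD_AMPS:
        send({"sensor": "solar_panel", "current_amps": current_amps, "state": "dark"})

def on_command(command, read_current, send):
    # A later command from the messaging system triggers an updated reading.
    if command == "recheck_sunlight":
        send({"sensor": "solar_panel", "current_amps": read_current(), "updated": True})

sent = []
report_if_dark(0.05, sent.append)                         # initial (possibly premature) report
on_command("recheck_sunlight", lambda: 1.4, sent.append)  # sun re-emerges from behind the cloud
print(sent)
```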

In some embodiments, the universal interface is configured to and may determine a first universally unique identifier assigned to the sensor of the first device, determine a second universally unique identifier assigned to a sensor of the second device, and cause the transmitter to transmit the first universally unique identifier and the second universally unique identifier with the sensor data to the network server. Accordingly, the network server of the messaging system 202 may determine to which device and sensor to transmit the message, and may determine the security access permissions of the first device sensor

In another example, a universal messaging system interface installed on a device may allow multiple sensors within the device to interact in a way that the sensors were not designed to operate. For example, a device may include a thermostat. The thermostat may include a motion sensor that is designed to turn on an LED display when motion is detected. The thermostat may also include a controller that controls the temperature of an air conditioning system. The native proprietary protocol and firmware of the thermostat is not designed to allow the motion detector to be used except to send signals to turn on the LED display as motion is detected. For example, the native proprietary protocol and firmware of the thermostat may not allow the motion sensor and the controller to communicate with one another. The thermostat may be integrated with a computing device (e.g., a Raspberry Pi board, an Arduino board, a microcontroller, a minicomputer, or any other suitable computing device) that has a universal messaging system interface installed on it. The universal messaging system interface allows the sensors of the thermostat to communicate with the messaging system 202. For example, the motion sensor and the controller may be assigned separate UUIDs and tokens. The universal messaging system interface may stream motion data from the motion sensor to the messaging system 202. The messaging system 202 may perform one or more functions based on the motion data. For example, the messaging system 202 may include a program that sends a message to the controller anytime motion is detected by the motion sensor. The program may be created by a user of the thermostat using the designer graphical interface implemented by the platform network 110 or the design interface 120 described above. The program may be stored in the messaging system 202, and may access the motion data and convert motion sensor values to a command that is included in the message. The command may instruct the controller to set the temperature of the air conditioning system to 72 degrees. Accordingly, sensors of the thermostat that are not designed to communicate with one another can exchange messages using the messaging system and the messaging system interface
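
A sketch of such a stored program, reduced to a single hypothetical callback that converts a motion event into a controller command, might be:

```python
def on_motion_data(motion_detected, send_to_controller):
    # Convert a motion-sensor value into a command for the AC controller.
    if motion_detected:
        send_to_controller({"command": "set_temperature", "value_f": 72})

commands = []
on_motion_data(True, commands.append)
print(commands)  # -> [{'command': 'set_temperature', 'value_f': 72}]
```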


Parts List

100  system
102  messaging system
104  device directory
106  data storage
108  analytics database
110  platform network
112  APIs
114  messaging system gateway
116  messaging system interface
118  mobile gateway
120  design interface
200  system
202  messaging system
204  messaging system
206  messaging system gateway
208  messaging system interface
210  messaging system interface
212  messaging system interface
214  messaging system interface


Terms/Definitions

public messaging system

functions

token allow

registered IoT devices

Facebook.TM

display

server-to-server communications

status or state

galileo

air conditioning system

updated connection protocol and/or geolocation

field communication

other systems

other communications

light

messaging system

built-in universal messaging system interface

ability

different connection protocols

other UUIDs

other devices

multicast Domain Name System

inputs and/or outputs

various IoT devices

CoAP

techniques

coordinates

output

controller

new machines or devices

appropriate networks or devices

such a procedure

inter-cloud communications

devices or machines and/or systems

various generic functions

microcontroller

WiFi.TM

generic, native format

different manufacturers

second proprietary connection protocol

offline data analytics

bi-directional persistent connection

SDKs

Windows.TM

other IoT devices

sunlight being

computing device

UUIDs and security tokens

microphone

store and process data

multiple cloud servers

accelerometer

Beagle Bones

PANs

security permissions

MQTT

web-based design interface

demands

particular machine or device

closed-network machine

one or more messaging system plug-ins

different machines

communications or messages

logical sub-device

connection protocols

geolocation sensor data

geolocation sensor

manufacturer

other social media message

design interface

registration request

program code

thermostats

access point

received sunlight

monitored performance

global positioning system

other servers

modem

user

UUIDs and/or security tokens

blocks

web-based application

server-to-server communication

other appropriate machine-to-machine connection protocol

other suitable location

other device or machine

different classes

LED display

park

watches

second cloud network

near real-time

dark outside

state

first server

classes

link

connections

other component

device, system

operation

person or business

similar functionality

LAN side

minicomputer

Linux.TM

other similar devices

specific type

one or more devices

bluetooth

other people

Belkin Wemo.TM

BLE wearable devices

appropriate control system

their unique UUID

one or more peer-to-peer sockets

systems and devices or machines

regardless

registered account

assigned individual UUIDs

multiple developer platforms

messages

separate secure connection

third-party messaging account

installed messaging system interfaces

various devices

ruby

one or more machines or devices

time

global mesh network

processes

devices or systems

outbound communications

occurrence

UUID_MSGSYS

function

single plug-in

common messaging system

compass headings

using one or more communications protocols

programmers

frequency

security access permissions

sports venue

example

mobile device

users

blacklist

dimmable light bulb

CoAP connection protocol

behavior

list and/or array

native proprietary protocol and firmware

communications

secure connection

electronic device

Assigning APIs

control system or flow

consumer products

translation

API, messaging system

local area network

destination UUID

light change

flexibility

network devices

accelerometer data

three records

change

particular location

output functions

message exchange

one or more data processors

personal firewall

facebook

one or more network servers

pedometers

certain embodiments

CoAP device

plug-in combination

indicated function

devices or machines, systems

directory

universal messaging system interface

different proprietary protocol

proprietary network

connected device

machine-to-machine instant message exchange

message queuing telemetry transport

light bulbs

connection protocol

other available sensor

SmartThings.TM

first device

SMS message

siren

code

e.g., gateway

network-connectable device or system

multiple messaging system cloud networks

plug-ins

different connection protocol

gateway

pins

keypad input

multiple proprietary connection protocols

local network

specified sensors

only communications

other embodiments

photo sensor

distinct security token

transmitter

appropriate device, person, system

developer design tool

other transactions

home

different unique identifiers

services

different locations

one or more queries

thermostat

other devices or systems

JavaScript

different machine

unique identifiers

first connection protocol

hypertext transfer protocol

control

assigned UUID

private network signal

routing list

outputs

third communication

sensor data and message exchanges

features

person, device or machine

specific formats

world

multiple cloud servers and/or instances

machine or system

websocket-powered device

order

other protocols

separate machine or device

message routing

non-IoT devices and systems

middleware

other program

cloud services

different rooms

twitter

other suitable communication protocol

bluetooth low energy

registry store

temperature

their own unique UUID

display device

its machine

connects

four machines

protocols

condition

cloud networks

route

XMPP connection protocol

multiple developer platforms and protocols

universal messaging system interfaces

cities

first IoT device

receiver

registered person

XMPP

servers

certain threshold level

different device or system

anytime motion

one server

location

registrant

SNMP connection protocol

inputs

device or machine

suite

color

non-native connection protocols

other cloud networks

states

other levels

lights

field

MQTT connection protocol

other outputs

operations

machine or device, system

API queries

system gateway

other electronic device

input

second UUID

record

event

connected users

thresholds

public messaging systems

only a single messaging system

type

open plug-in architecture

convert motion sensor values

software

messaging API

part

second proprietary application programming interface

Raspberry Pi

artificial intelligence

device and sensor

local gateway

working

native protocol

processed sensor data

light bulb

New York

current amount

second server

either

cloud infrastructure system

LANs and PANs

data

program

control systems

received information

graphical interface input

received communication

similar manner

motion data

server-to-server connection

other suitable SDK

enough processing power

camera

whitelist

its UUID

component or program

single application programming interface request

latency

other suitable mobile device

“smart” light bulbs

UUIDs

network address translations

stock quote

Raspberry Pi board

motion detector

low-energy

application

device or machine, person, and/or system

respective distinct UUID and token

user’s

processing functions

two Nest.TM

detect trends

personal area network

server

specified time period

registered person, machine

one or more native connection protocols

machines

message exchanges

one or more data storage systems

communication network

other association

streamed data

three Philips Hue.TM

Intel.TM

ZigBee

multiple plug-ins

offline

certain format

status

registered devices and systems

message transport

room

sunlight

next UUID

Simple Network Management Protocol

compute server

person, device, or system

particular area

Accessible IoT devices

access

first native connection protocol

tagged data

different location

operates

non-issuing system or network

flow

trigger messages

health condition

application or program

path

devices

more detail

multiple cloud networks

one or more software developer kits

routing paradigm

person’s

open source machine-to-machine messaging platform

WiFi

mobile operating system

embodiments

Nest.TM

network server

one or more servers

cloud network

geolocation

router

burglary alarms

properties

unique identifier

sensor

multiple sensors

message

information

capability

response

separate UUIDs and tokens

one or more remote servers

smoke

second device

exchange messages

messaging system plug-in

appropriate destination

mobile phone, tablet, laptop

first UUID

continuous operation

programs

converted command

Network Address Translation Traversal application

different proprietary application programming interfaces

proprietary protocol or firmware

other appropriate connection protocol

independent messaging systems

different APIs

gyroscope

latitude-longitude coordinates

private messaging system

their UUID

device or system

various operations

commands

anytime a change

peer-to-peer connection

web browser

assigned UUID and token

other electronic component

proper security permissions

whitelists and blacklists

proprietary protocol

users, machines or devices, systems, or components

access control

AllJoyn connection protocol

headsets

machine or device

different proprietary protocols

other connection platforms

Bluetooth low Energy.TM

analytics systems

different types

query

clouds

machine’s

system

intermediary

particular plug-in

message schema

multiple servers

request sensor data

destination device

real-time

proprietary nature

first device sensor

same messaging system cloud network

second universally unique identifier

application programming interface

second native connection protocol

motion

events

one or more LANs or PANs

analytics database

component

universal protocol

storage

specific search criteria

devices and sub-devices

certain criteria

sub-device

JavaScript piece

same LAN

examples

two records

cases

criteria

user control system

different points

office

one example

websockets

eavesdropping mode

servers and applications

subscribing device access

payload

mobile device operating systems

device, system, or component

distinct UUID

second connection protocol

designer

other available output

proprietary connection protocol

WiFi network

such Phillips Hue.TM

host

uses

such functionality

own on-premises computers

other clouds

data storage

combination

other IoT device

registration

technology

MDNS

performance

different application programming interface requests

proprietary application programming interfaces

MQTT device

various levels

Internet Protocol address

functionality

components

similar functions

destination program

connected or preferred connection protocol

mobile gateway

proprietary cloud network

Twitter.TM

web page

support

five records

issues

compass

sensor data storage

short-range communication protocol interface

new connection protocols

UUIDs and/or tokens

network

separate computing device

IoT device

Node.JS

social media

device, system, or user

certain distance

other switching mechanism

other computing task

registered IoT device

firmware

third-party messaging accounts

personal devices

token combination

vehicle

peer-to-peer communications session

mobile application APIs

communication packet

instructions

permission

proprietary firmware

built-in universal messaging system interfaces

native connection protocol

level

sales analytics

yuns

city

connected machine or device

destination application or program

messaging system interface

wide area network

physical form

employees and affiliates

application programming interfaces

further details

home and/or office computer

other data APIs

second native connection protocols

tactile output

lighting system controller

different cloud network

registered machine or device

sub-device mapped

various commands

list or array

workspace

whitelists and/or blacklists operates

messaging systems

photodiode

python

plug-ins allow machines

other network external

appropriate access level permission

public cloud network

particular instance

different plug-ins

smoke alarm

Presence Protocol

supported protocol

websockets connection protocol

machine or device changes locations

private cloud network

second communication

separate list or array

one or more radios

further instructions

wired interface

first cloud network

IoT devices

platform network

other analytics

different IoT devices

roof

computing devices

computers

one or more processors

unique identifier and/or token

number

radio frequency

header field

other suitable protocol

temperature sensor

other suitable connection protocol

multiple devices and/or systems

other functions

LinkedIn.TM

one or more UUIDs

security feature

connecting devices

LANs

different networks

device

capabilities

its own UUID

device’s

arduino board

Internet Things

protocol

different connection protocols or interfaces

its distinct UUID

such a mesh network

partial real-time

other device

machines or devices

gateway software

first proprietary application programming interface

inbound

other third-party messaging account

proprietary application programming interface

Infrared Data Association

real-time exchange

list

things

messaging system gateway

accelerator

different native connection protocols

real-time monitoring

temporarily offline

various components

signals

applications

stored record

one device or system

amount

certain data

england

one or more messaging system interfaces

machine learning

native firmware

sensors

firewall

network access

different UUID and token

mobile phone

one or more connection protocols

systems

application protocol

native connection protocols

security token

format

third proprietary application programming interface

system or device

analytics engines

specific machine or device

other payloads

device directory

natural light

people

devices and/or systems

devices and systems

machine-to-machine instant messages

one or more sensors

operating system

messaging system software

london

third-party messaging service

one or more existing programs

communication

standalone physical device

separate devices or machines

UUID_MSGSYSINT

demand

remote location

one or more messaging system gateways

different users

token

messaging

destination

continents

Android.TM

assigned unique identifier

one or more components or programs

online

light occurs

network servers

messaging system gateway or hub

control system

Transmission Control Protocol

one or more computers

given point

one or more lights

other feature

changes

one or more analytics engines

proprietary Nest.TM

Insteon.TM

other appropriate classification

one or more plug-ins

ZigBee.TM

other suitable computing device

one or more computing devices

public cloud

ordinary skill

building

tokens

command

various network devices

proprietary communications

various lists

resulting message or payload

SMS messages

plug-in

Extensible Messaging

first proprietary connection protocol

motion sensor

other messaging service

other suitable search criteria

same proprietary protocols

outside environment

track

wearable technology

such embodiments

messaging system messaging system interfaces

identifier

system and/or device

similar plug-ins

other appropriate period

mesh network

one or more functions

sensor data

search criteria

different subsets

one or more proprietary application programming interfaces

Bluetooth.TM

issued UUIDs and/or security tokens

other suitable firewall

proprietary cloud server

third IoT device

various properties

other devices or machines

internet

current being

touchscreen input

Philips Hue.TM

only select devices, systems

native protocols

one or more dynamic routing protocols

single message transmitted

first universally unique identifier

several ways

queries

machines or devices, systems

smoke alarms

different proprietary cloud network

universal interface

similar records

UUID

cloud infrastructure

account

systems and/or devices

devices, systems

HTTP connection protocol

same routing technique

anything

other networks

User Datagram Protocol

printer

devices or machines

particular company

third device

third-party messaging services

instance

second IoT device

four levels

AllJoyn

suitable machine-to-machine connection protocol

identity

additional information

known connections

machine

intranet

SNMP

available data

HTTP

mobile messaging system gateway

multiple connection protocols

appropriate access

designer graphical interface

destination device’s

second IoT devices

whitelists and/or blacklists

non-transitory computer-readable storage medium containing instructions

cloud

home automation devices

universal application programming interface

sub-device_D

solar panel

person

system diagram

messaging system interfaces

plug-in architecture

three lights

requests

system interface

countries

transmitted

wireless router or modem

remote servers

general format

various devices and/or systems

multiple networks

APIs

LinkedIn

one or more machines

other machines

connection

updated sensor data

dimmable light

weather service APIs

its destination

separate sub-device

Internet of Things


Drawings

Brief Description:

illustrates an IoT system 100 in accordance with one embodiment.

Detailed Description:

Figure 1 illustrates an IoT system 100 in one embodiment. The IoT system 100 comprises IoT devices 102 communicatively coupled via a wide area network 104 to a server system 106 via an optional proxy server 108. The network topology of the IoT system 100 is hub-and-spoke. Each of the IoT devices 102 has a 1:1 communication channel to the server system 106, and each of the IoT devices 102 communicates with the others, if at all, via the server system 106. The optional proxy server 108 may improve the performance of the IoT system 100 by mirroring some or all of the state of the server system 106, thus enabling the IoT devices 102 to communicate without consuming bandwidth on the wide area network 104 or incurring its latency. The optional proxy server 108 is typically located at, or near, the facility where the IoT devices 102 are located.

Brief Description:

illustrates an IoT system 200 in accordance with one embodiment.

Detailed Description:

Figure 2 illustrates an IoT system 200 in one embodiment. The IoT system 200 comprises IoT devices 202 communicatively coupled via a wide area network 204 to a server system 206 via an optional proxy server 210. The network topology of the IoT system 200 is a hybrid hub-and-spoke. One or more of the IoT devices 202 acts as a gateway device 208 providing a communication channel to the server system 206. The IoT devices 202 that are not the gateway device 208 communicate directly with the gateway device 208, or via the proxy server 210, which communicates on their behalf and on its own behalf with the server system 206. The optional proxy server 210 may improve the performance of the IoT system 200 by mirroring some or all of the state of the server system 206, thus enabling the IoT devices 202 to communicate without consuming bandwidth on the wide area network 204 or incurring its latency. The optional proxy server 210 is typically located at, or near, the facility where the IoT devices 202 are located.
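
A minimal sketch of this hybrid hub-and-spoke forwarding, with hypothetical class names standing in for the gateway device 208 and the server system 206, might be:

```python
class ServerSystem:
    # Stand-in for the server system 206.
    def receive(self, sender, payload):
        print(f"server system received from {sender}: {payload}")

class GatewayDevice:
    # Stand-in for the gateway device 208; forwards on behalf of the other
    # IoT devices and on its own behalf.
    def __init__(self, server):
        self.server = server
    def forward(self, sender, payload):
        self.server.receive(sender, payload)

server = ServerSystem()
gateway = GatewayDevice(server)
gateway.forward("iot-device-7", {"temperature_c": 19.5})  # on behalf of another device
gateway.forward("gateway-208", {"status": "online"})      # on its own behalf
```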

Brief Description:

illustrates an IoT system 300 in accordance with one embodiment.

Detailed Description:

Figure 3 illustrates an IoT system 300 in one embodiment. The IoT system 300 comprises IoT devices 302 communicatively coupled via a wide area network 304 to a server system 306 via an optional proxy server 310. The network topology of the IoT system 300 is a partially connected mesh network. The IoT devices 302 are organized into groups of fully connected meshes, and communicate within a mesh group without interacting with the server system 306 or proxy server 310. In other embodiments, there may be one fully connected mesh of the IoT devices 302, although this requires that each of the IoT devices 302 is in direct communication range of all of the others.

One or more of the IoT devices 302 acts as a gateway device 308 providing a communication channel to the server system 306. The IoT devices 302 that are not the gateway device 308 communicate directly with the gateway device 308, or via the proxy server 310, which communicates on their behalf and on its own behalf with the server system 306. The optional proxy server 310 may improve the performance of the IoT system 300 by mirroring some or all of the state of the server system 306, thus enabling the IoT devices 302 to communicate without consuming bandwidth on the wide area network 304 or incurring its latency. The optional proxy server 310 is typically located at, or near, the facility where the IoT devices 302 are located.

Brief Description:

illustrates an IoT system 400 in accordance with one embodiment.

Detailed Description:

Figure 4 illustrates an IoT system 400 in one embodiment. The IoT system 400 comprises IoT devices 402 communicatively coupled via a wide area network 404 to a server system 406 via an optional proxy server 410. The network topology of the IoT system 400 is a partially connected mesh network. The IoT devices 402 are organized into groups of partially connected meshes, and communicate within a mesh group without interacting with the server system 406 or proxy server 410. This type of network topology may be found in environments in which the IoT devices 402 are spread apart and battery powered, so that they can only communicate using relatively short-range wireless communications (e.g., near-field communications). In such environments a particular one of the IoT devices 402 may only be within communication range of a nearest neighbor. 

One or more of the IoT devices 402 acts as a gateway device 408 providing a communication channel to the server system 406. The IoT devices 402 that are not the gateway device 408 communicate directly with the gateway device 408, or via the proxy server 410, which communicates on their behalf and on its own behalf with the server system 406. The optional proxy server 410 may improve the performance of the IoT system 400 by mirroring some or all of the state of the server system 406, thus enabling the IoT devices 402 to communicate without consuming bandwidth on the wide area network 404 or incurring its latency. The optional proxy server 410 is typically located at, or near, the facility where the IoT devices 402 are located.

Brief Description:

illustrates an embodiment of an IoT device 500 to implement components and process steps of the system described herein.

Detailed Description:

Figure 5 illustrates an embodiment of an IoT device 500 to implement components and process steps of IoT devices described herein.

Input devices 504 comprise transducers that convert physical phenomena into machine internal signals, typically electrical, optical or magnetic signals. Signals may also be wireless in the form of electromagnetic radiation in the radio frequency (RF) range but also potentially in the infrared or optical range. Examples of input devices 504 are keyboards, which respond to touch or physical pressure from an object or proximity of an object to a surface; mice, which respond to motion through space or across a plane; microphones, which convert vibrations in the medium (typically air) into device signals; and scanners, which convert optical patterns on two or three dimensional objects into device signals. The signals from the input devices 504 are provided via various machine signal conductors (e.g., busses or network interfaces) and circuits to memory 506

The memory 506 is typically what is known as a first or second level memory device, providing for storage (via configuration of matter or states of matter) of signals received from the input devices 504, instructions and information for controlling operation of the CPU 502, and signals from storage devices 510

The memory 506 and/or the storage devices 510 may store computer-executable instructions, thus forming logic 514 that, when applied to and executed by the CPU 502, implements embodiments of the processes disclosed herein.

Information stored in the memory 506 is typically directly accessible to the CPU 502 of the device. Signals input to the device cause the reconfiguration of the internal material/energy state of the memory 506, creating in essence a new machine configuration, influencing the behavior of the IoT device 500 by affecting the behavior of the CPU 502 with control signals (instructions) and data provided in conjunction with the control signals. 

Second or third level storage devices 510 may provide a slower but higher capacity machine memory capability. Examples of storage devices 510 are hard disks, optical disks, large capacity flash memories or other non-volatile memory technologies, and magnetic memories. 

The CPU 502 may cause the configuration of the memory 506 to be altered by signals in storage devices 510. In other words, the CPU 502 may cause data and instructions to be read from storage devices 510 into the memory 506, from which they may then influence the operations of the CPU 502 as instructions and data signals, and from which they may also be provided to the output devices 508. The CPU 502 may alter the content of the memory 506 by signaling to a machine interface of memory 506 to alter the internal configuration, and may then convey signals to the storage devices 510 to alter their material internal configuration. In other words, data and instructions may be backed up from memory 506, which is often volatile, to storage devices 510, which are often non-volatile.

Output devices 508 are transducers which convert signals received from the memory 506 into physical phenomena such as vibrations in the air, patterns of light on a machine display, vibrations (i.e., haptic devices), or patterns of ink or other materials (i.e., printers and 3-D printers).

The network interface 512 receives signals from the memory 506 and converts them into electrical, optical, or wireless signals to other machines, typically via a machine network. The network interface 512 also receives signals from the machine network and converts them into electrical, optical, or wireless signals to the memory 506.

Brief Description:

illustrates an embodiment of an IoT device 600.

Detailed Description:

Referring to Figure 6, an IoT device 600 in one embodiment comprises an antenna 602, control logic 604, wireless communication logic 606, a memory 608, a power manager 610, a battery 612, logic 616, and user interface logic 614.

The control logic 604 controls and coordinates the operation of other components as well as providing signal processing for the IoT device 600. For example, the control logic 604 may extract baseband signals from radio frequency signals received from the wireless communication logic 606, and process baseband signals up to radio frequency signals for communications transmitted to the wireless communication logic 606. Control logic 604 may comprise a central processing unit, digital signal processor, and/or one or more controllers or combinations of these components.

The IoT device 600 may further comprise memory 608, which may be utilized by the control logic 604 to read and write instructions (commands) and data (operands for the instructions). The memory 608 may comprise logic 616 to carry out aspects of the processes disclosed herein, e.g., those aspects executed by a smart phone or other mobile device.

A human user or operator of the IoT device 600 may utilize the user interface logic 614 to receive information from and input information to the IoT device 600. Images, video and other display information, for example, user interface optical patterns, may be output to the user interface logic 614, which may for example operate as a liquid crystal display or may utilize other optical output technology. The user interface logic 614 may also operate as a user input device, being touch sensitive where contact or close contact by a user’s finger or other device handled by the user may be detected by transducers. An area of contact or proximity to the user interface logic 614 may also be detected by transducers and this information may be supplied to the control logic 604 to affect the internal operation of the IoT device 600 and to influence control and operation of its various components.

Audio signals may be provided to user interface logic 614, from which signals are output to one or more speakers to create pressure waves in the external environment representing the audio. The IoT device 600 may convert audio phenomena from the environment into internal electrical or optical signals by operating a microphone and audio circuit (not illustrated).

The IoT device 600 may operate on power received from a battery 612. The battery 612 capability and energy supply may be managed by a power manager 610

The IoT device 600 may transmit wireless signals of various types and ranges (e.g., cellular, GPS, WiFi, Bluetooth, and near field communication, i.e., NFC). The IoT device 600 may also receive these types of wireless signals. Wireless signals are transmitted and received using the wireless communication logic 606 coupled to one or more antenna 602. Other forms of electromagnetic radiation may be used to interact with proximate devices, such as infrared (not illustrated).

Brief Description:

illustrates an embodiment of an IoT device 700.

Detailed Description:

Referring to the IoT device 700 of Figure 7, signal processing and system control 706 controls and coordinates the operation of other components as well as providing signal processing for the IoT device 700. For example, signal processing and system control 706 may extract baseband signals from radio frequency signals received from the wireless communication 708 logic, and process baseband signals up to radio frequency signals for communications transmitted to the wireless communication 708 logic. Signal processing and system control 706 may comprise a central processing unit, digital signal processor, and/or one or more controllers or combinations of these components.

The IoT device 700 may further comprise memory 716, which may be utilized by the signal processing and system control 706 to read and write instructions (commands) and data (operands for the instructions).

A human user or operator of the IoT device 700 may utilize the user interface 722 to receive information from and input information to the IoT device 700. Images, video and other display information, for example, user interface optical patterns, may be output to the user interface 722, which may for example operate as a liquid crystal display or may utilize other optical output technology. The user interface 722 may also operate as a user input device, being touch sensitive where contact or close contact by a user’s finger or other device handled by the user may be detected by transducers. An area of contact or proximity to the user interface 722 may also be detected by transducers and this information may be supplied to the signal processing and system control 706 to affect the internal operation of the IoT device 700 and to influence control and operation of its various components.

A camera 724 may interface to image processing 726 logic to record images and video from the environment. The image processing 726 may operate to provide image/video enhancement, compression, and other transformations, and pass the results to the signal processing and system control 706 for further processing and storage to memory 716. Images and video stored in the memory 716 may also be read by the signal processing and system control 706 and output to the user interface 722 for display to a user of the IoT device 700.

Audio signals may be provided to user interface 722, from which signals are output to one or more speakers to create pressure waves in the external environment representing the audio. The IoT device 700 may convert audio phenomena from the environment into internal electrical or optical signals by operating a microphone and audio circuit (not illustrated).

The IoT device 700 may operate on power received from a battery 720. The battery 720 capability and energy supply may be managed by a power manager 718

The IoT device 700 may transmit wireless signals of various types and ranges (e.g., cellular, WiFi, Bluetooth, and near field communication, i.e., NFC). The IoT device 700 may also receive these types of wireless signals. Cellular wireless signals are transmitted and received using wireless communication 708 logic coupled to one or more antenna 702. Shorter-range wireless signals may be transmitted and received via antenna 704 and wireless communication logic 728. Other forms of electromagnetic radiation may be used to interact with proximate devices, such as infrared (not illustrated).

The device may utilize a haptic driver 732 which controls a vibration generator 714 to cause vibrations in response to events identified by signal processing and system control 706, such as received text messages, emails, incoming calls, or other events that require the user’s or the device’s attention.

A subscriber identity module (SIM 710) may be present in some mobile devices, especially those operated on the Global System for Mobile Communication (GSM) network. The SIM 710 stores, in machine-readable memory, personal information of a mobile service subscriber, such as the subscriber’s cell phone number, address book, text messages, and other personal data. A user of the IoT device 700 can move the SIM 710 to a different device and maintain access to their personal information. A SIM 710 typically has a unique number which identifies the subscriber to the wireless network service provider.

The IoT device 700 may include an audio driver 730 including an audio encoder/decoder for encoding and decoding digital audio files or audio files stored by memory 716 or SIM 710, or received in real time via one of the antenna 702 or antenna 704. The audio driver 730 is controlled by the signal processing and system control 706, and decoded audio is provided to one or more speaker 712 to create pressure waves in the external environment representing the audio.

Brief Description:

illustrates a diagrammatic representation of an IoT device 800 in the form of a computer system within which a set of instructions may be executed for causing the machine to perform any one or more of the IoT functionalities discussed herein, according to an example embodiment.

Detailed Description:

Figure 8 illustrates a diagrammatic representation of an IoT device 800 in the form of a computer system within which a set of instructions may be executed for causing the IoT device 800 to perform any one or more of the methodologies discussed herein, according to an example embodiment. Specifically, Figure 8 shows a diagrammatic representation of the IoT device 800 in the example form of a computer system, within which instructions 808 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the IoT device 800 to perform any one or more of the methodologies discussed herein may be executed. 

The instructions 808 transform the general, non-programmed IoT device 800 into a particular IoT device 800 programmed to carry out the described and illustrated functions in the manner described.  In alternative embodiments, the IoT device 800 operates as a standalone device or may be coupled (e.g., networked) to other machines.  In a networked deployment, the IoT device 800 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.  The IoT device 800 may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a PDA, an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions 808, sequentially or otherwise, that specify actions to be taken by the IoT device 800

Further, while only a single IoT device 800 is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions 808 to perform any one or more of the methodologies discussed herein.

The IoT device 800 may include processors 802, memory 804, and I/O components 842, which may be configured to communicate with each other such as via a bus 844. In an example embodiment, the processors 802 (e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an ASIC, a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor 806 and a processor 810 that may execute the instructions 808. The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. Although Figure 8 shows multiple processors 802, the IoT device 800 may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof.

The memory 804 may include a main memory 812, a static memory 814, and a storage unit 816, each accessible to the processors 802 such as via the bus 844. The main memory 812, the static memory 814, and the storage unit 816 store the instructions 808 embodying any one or more of the methodologies or functions described herein. The instructions 808 may also reside, completely or partially, within the main memory 812, within the static memory 814, within the machine-readable medium 818 within the storage unit 816, within at least one of the processors 802 (e.g., within the processor’s cache memory), or any suitable combination thereof, during execution thereof by the IoT device 800.

The I/O components 842 may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on.  The specific I/O components 842 that are included in a particular machine will depend on the type of machine.  For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device.  It will be appreciated that the I/O components 842 may include many other components that are not shown in Figure 8.  The I/O components 842 are grouped according to functionality merely for simplifying the following discussion and the grouping is in no way limiting.  In various example embodiments, the I/O components 842 may include output components 828 and input components 830.  The output components 828 may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth.  The input components 830 may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like.

In further example embodiments, the I/O components 842 may include biometric components 832, motion components 834, environmental components 836, or position components 838, among a wide array of other components. For example, the biometric components 832 may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components 834 may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environmental components 836 may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components 838 may include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like.

Communication may be implemented using a wide variety of technologies.  The I/O components 842 may include communication components 840 operable to couple the IoT device 800 to a network 820 or devices 822 via a coupling 824 and a coupling 826, respectively.  For example, the communication components 840 may include a network interface component or another suitable device to interface with the network 820.  In further examples, the communication components 840 may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities.  The devices 822 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB).

Moreover, the communication components 840 may detect identifiers or include components operable to detect identifiers.  For example, the communication components 840 may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals).  In addition, a variety of information may be derived via the communication components 840, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth.

EXECUTABLE INSTRUCTIONS AND MACHINE STORAGE MEDIUM

The various memories (i.e., memory 804, main memory 812, static memory 814, and/or memory of the processors 802) and/or storage unit 816 may store one or more sets of instructions and data structures (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein.  These instructions (e.g., the instructions 808), when executed by processors 802, cause various operations to implement the disclosed embodiments.


Parts List

100

IoT system

102

IoT devices

104

wide area network

106

server system

108

proxy server

200

IoT system

202

IoT devices

204

wide area network

206

server system

208

gateway device

210

proxy server

300

IoT system

302

IoT devices

304

wide area network

306

server system

308

gateway device

310

proxy server

400

IoT system

402

IoT devices

404

wide area network

406

server system

408

gateway device

410

proxy server

500

IoT device

502

CPU

504

input devices

506

memory

508

output devices

510

storage devices

512

network interface

514

logic

600

IoT device

602

antenna

604

control logic

606

wireless communication logic

608

memory

610

power manager

612

battery

614

user interface logic

616

logic

700

IoT device

702

antenna

704

antenna

706

signal processing and system control

708

wireless communication

710

SIM

712

speaker

714

vibration generator

716

memory

718

power manager

720

battery

722

user interface

724

camera

726

image processing

728

wireless communication logic

730

audio driver

732

haptic driver

800

IoT device

802

processors

804

memory

806

processor

808

instructions

810

processor

812

main memory

814

static memory

816

storage unit

818

machine-readable medium

820

network

822

devices

824

coupling

826

coupling

828

output components

830

input components

832

biometric components

834

motion components

836

environmental components

838

position components

840

communication components

842

I/O components

844

bus


Terms/Definitions

MAC layer

media access control sublayer, a layer 2 communication technology that along with the logical link control (LLC) sublayer together make up the data link layer. Within that data link layer, the LLC provides flow control and multiplexing for the logical link, while the MAC provides flow control and multiplexing for the transmission medium. These two sublayers together correspond to layer 2 of the OSI model. In devices implementing IEEE 802 standards, the MAC provides a control abstraction of the physical layer such that the complexities of physical link control are invisible to the LLC and upper layers of the network stack. Thus any LLC block (and higher layers) may be used with any MAC. In turn, the MAC is formally connected to the PHY via a media-independent interface. The MAC is typically integrated with the PHY within the same device package, although in theory any MAC may be used with any PHY, independent of the transmission medium.

IPSEC

Internet Protocol Security, a set of protocols that provide authentication and encryption to Internet Protocol (IP) packets, adding an extra layer of security on IP communications.

Bluetooth Low Energy (BLE)

a version of Bluetooth technology that consumes lower power than conventional Bluetooth. BLE is designed for use by portable devices and networking implementations such as Bluetooth Mesh, a Bluetooth topology that allows devices to be connected together, sending/repeating commands from the hub to any connected device. Apple’s iBeacon is an example of a BLE application.

IGMP

Internet Group Management Protocol, a communication protocol that is based on the IP protocol and is used to support group communication. IGMP allows for IP multicasting, which enables the transmission of IP packets to many receivers with one transmission.

802.11

a family of wireless communication protocols and technologies commonly referred to as WiFi. Examples of 802.11 are variations such as 802.11a, 802.11b, 802.11g, 802.11ah, and 802.11i.

IIOT

Industrial Internet of Things, encompassing connected large-scale machinery and industrial systems such as factory-floor monitoring, HVAC, smart lighting, and security. For example, equipment can send real-time information to an application so operators can better understand how efficiently that equipment is running. Also referred to as Industry 4.0, Industrie 4.0, and Industrial IoT.

Thread protocol

an IPv6-based, low-power mesh networking technology for IoT products, based on 6LoWPAN.

NFC

near field communications, a set of communication protocols that enable two electronic devices, one of which is usually a portable device such as a smartphone, to establish communication by bringing them within 4 cm (1.6 in) of each other. NFC devices are often used in contactless payment systems, similar to those used in credit cards and electronic ticket smartcards, and allow mobile payment to replace or supplement these systems. This is sometimes referred to as NFC/CTLS (Contactless) or CTLS NFC. NFC is used for social networking, for sharing contacts, photos, videos or files. NFC-enabled devices can act as electronic identity documents and keycards. NFC offers a low-speed connection with simple setup that can be used to bootstrap more capable wireless connections.

6LoWPAN

a communication protocol that compresses IPv6 packets for communication by small, low-power devices.

PHY

the physical layer of the OSI model, the circuitry required to implement physical layer functions.

Sigfox

a low-bandwidth, wireless protocol that provides improved range and obstacle penetration for short messages over some other IoT communication technologies.

RFID

radio frequency ID, devices and systems that utilize electromagnetic fields to automatically identify and track tags attached to objects. The tags contain electronically-stored information. Passive tags collect energy from a nearby RFID reader’s interrogating radio waves. Active tags have a local power source (such as a battery) and may operate hundreds of meters from the RFID reader. Unlike a barcode, the tag need not be within the line of sight of the reader, so it may be embedded in the tracked object.

gateway

a device that operates to bridge communication between two network systems.

iBeacon

a technology introduced by Apple that uses sensors to locate iOS or Android devices indoors and can send them notifications via Bluetooth Low Energy (BLE).

IPv6

a newer Internet protocol that provides more addresses than the IPv4 protocol. An IPv6 address is a 128-bit alphanumeric string that identifies an endpoint device in the Internet Protocol Version 6 (IPv6) addressing scheme.

beacon

wireless devices that communicate location signals indoors, typically without the need for GPS.

L2TP

Layer 2 Tunneling Protocol, a tunneling protocol used to support virtual private networks (VPNs) or as part of the delivery of services by Internet Service Providers. It does not provide any encryption or confidentiality by itself, relying on an encryption protocol that it passes within the tunnel to provide privacy.

Zigbee

short range wireless networking protocol that primarily operates on the 2.4 GHz frequency spectrum. Zigbee devices connect in a mesh topology, forwarding messages from controlling nodes to slaves, which repeat commands to other connected nodes.

LPLN

Low Power Lossy Networks, networks comprised of embedded devices with limited power, memory, and processing resources. LPLNs are typically optimized for energy efficiency, may use BLE and can be applied to industrial monitoring, building automation, connected homes, healthcare, environmental monitoring, urban sensor networks, asset tracking, and more.

Bluetooth

a family of wireless communication technologies for exchanging data over short distances (using short-wavelength UHF radio waves in the ISM band from 2.4 to 2.485 GHz) from fixed and mobile devices. Variations of Bluetooth are many, including Bluetooth Low Energy, Class 1 Bluetooth (for communications over 100 m, up to 1 km) and Class 2 Bluetooth (10-20 m range).

access point

a node that allows users or devices to authenticate to and utilize a network. Access points often implement 802.11 wireless communication.

Cloud Computing Environment


Drawings

Brief Description:

illustrates a schematic diagram of a cloud computing environment in which embodiments of the present invention may be implemented. 

Detailed Description:

Figure 1 is a schematic diagram of a cloud computing environment 102 in which embodiments of the present invention may be implemented. As shown, cloud computing environment 102 includes one or more cloud computing nodes 104 with which local computing devices used by cloud consumers, such as, for example, a personal digital assistant (PDA) or cellular telephone 112, desktop computer 110, laptop computer 106, and/or automobile computer system 108 may communicate. Nodes 104 may communicate with one another. The nodes 104 may be grouped (not shown) physically or virtually, in one or more networks, such as private, community, public, or hybrid clouds as described hereinabove, or a combination thereof, which allows cloud computing environment 102 to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices 112-N shown in Figure 1 are intended to be illustrative only and that computing nodes 104 and cloud computing environment 102 can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser).

Brief Description:

illustrates a diagram of abstraction model layers of a cloud computing environment in which embodiments of the present invention may be implemented. 

Detailed Description:

Figure 2 is a diagram of abstraction model layers of a cloud computing environment in which embodiments of the present invention may be implemented. In Figure 2, a set of functional abstraction layers provided by cloud computing environment 102 (Figure 1) is shown. It should be understood in advance that the components, layers, and functions shown in Figure 2 are intended to be illustrative only and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided: 

hardware and software layer 208 includes hardware and software components. Examples of hardware components include: mainframes 242; RISC (Reduced Instruction Set Computer) architecture based servers 244; servers 246; blade servers 248; storage devices 250; and networks and networking components 252. In some embodiments, software components include network application server software 254 and database software 256

Virtualization layer 206 provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers 226; virtual storage 234; virtual networks 236, including virtual private networks; virtual applications and operating systems 238; and virtual clients 240

In one example, management layer 204 may provide the functions described below. Resource provisioning 212 provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and pricing 214 provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal 216 provides access to the cloud computing environment for consumers and system administrators. Service level management 218 provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment 222 provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA.

Workloads layer 202 provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation 210; software development and lifecycle management 220; virtual classroom education delivery 224; data analytics processing 228; transaction processing 230; and file transfer processing 232


Parts List

100

item

102

cloud computing environment

104

computing nodes

106

laptop computer

108

automobile computer system

110

desktop computer

112

cellular telephone

202

workloads layer

204

management layer

206

Virtualization layer

208

hardware and software layer

210

mapping and navigation

212

resource provisioning

214

metering and pricing

216

user portal

218

service level management

220

software development and lifecycle management

222

Service Level Agreement (SLA) planning and fulfillment

224

virtual classroom education delivery

226

virtual servers

228

data analytics processing

230

transaction processing

232

file transfer processing

234

virtual storage

236

virtual networks

238

virtual applications and operating systems

240

virtual clients

242

mainframes

244

RISC (Reduced Instruction Set Computer) architecture based servers

246

servers

248

blade servers

250

storage devices

252

networks and networking components

254

network application server software

256

database software


Terms/Definitions

identity verification

functional abstraction layers

hardware and software layer

following layers

network and/or network

resources

networks and networking components

examples

public

local computing devices

servers

virtual storage

cloud consumers and tasks

virtual classroom education delivery

type

functionality

provide cost tracking

computerized device

one or more cloud computing nodes

layers

computing nodes

blade servers

personal digital assistant

billing or invoicing

dynamic procurement

pre-arrangement

invention

one or more networks

virtual entities

network application server software

procurement

virtual applications and operating systems

virtual servers

connection

hybrid clouds

example

Service Level Agreement (SLA) planning and fulfillment

application software licenses

fulfillment

cloud computing environment

combination

required service levels

protection

functions

diagram

types

computing devices

pricing

web browser

abstraction model layers

cloud

automobile computer system

software components

schematic diagram

data

user portal

cloud consumers

Virtualization layer

cloud consumer

mainframes

tasks

RISC (Reduced Instruction Set Computer) architecture based servers

database software

software development and lifecycle management

data analytics processing

private

community

transaction processing

workloads and functions

metering and pricing

following examples

present invention

computing resources

service level management

abstraction layer

advance

cellular telephone

nodes

II embodiments

hardware components

embodiments

storage devices

local computing device

consumption

infrastructure

laptop computer

platforms and/or software

virtual networks

networks

file transfer processing

virtual clients

access

services

one example

consumers and system administrators

components

management layer

resource provisioning

hardware and software components

security

cloud computing resource allocation and management

future requirement

other resources

mapping and navigation

virtual private networks

desktop computer

computer

workloads layer

operating systems

Blockchain


Drawings

Brief Description:

illustrates a blockchain transaction process 100 in accordance with one embodiment.

Detailed Description:

Referring to Figure 1, a blockchain is an ever-growing set of data blocks. Each block records a collection of transactions. Blockchains distribute these transactions across a group of computers. Each computer maintains its own copy of the blockchain transactions.

A blockchain is a continuously growing list of records, called blocks, which are linked and secured using cryptography. Each block typically comprises a cryptographic hash of the previous block, a timestamp, and transaction data. By design, a blockchain is resistant to modification of the data. Blockchains may implement an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way.

A blockchain is typically managed by multiple parties collectively adhering to a protocol for inter-node communication and validating new blocks. Once recorded, the data in any given block cannot be altered retroactively without alteration of all subsequent blocks, which requires consensus among the operators.

Cryptography involving mathematical methods of keeping data secret and proving identity is utilized when recording transactions. One digital key ensures only an owner can enter a transaction to the blockchain involving their assets, and another digital key lets other parties confirm it really was the owner who added the transaction.

Blockchain is resistant to tampering or other changes by utilizing a cryptographic technique called hashing. Hashing reduces data to a sequence of seemingly random characters — for example, the hash of the phrase “the quick brown fox” is “9ECB36561341D18EB65484E833EFEA61EDC74B84CF5E6AE1B81C63533E25FC8F” using a hash method called SHA-256. Tweaking just one letter in the phrase produces a completely different hash, and you can’t go backward to figure out the original data from the hash.
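
This behavior can be reproduced directly with a standard hash library; the short sketch below uses Python's hashlib (an illustrative choice, not specified by the source) to hash the phrase and a one-letter variant.

```python
# Minimal illustration of the hashing behavior described above.
# Changing a single character yields a completely different digest.
import hashlib

def sha256_hex(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest().upper()

print(sha256_hex("the quick brown fox"))   # digest of the original phrase
print(sha256_hex("the quick brown fix"))   # one letter changed, unrelated digest
```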

With blockchain, hashes are linked together so any minute change is immediately visible, not just for the block housing it but for all other blocks added later. With red flags that big for changes that small, auditing becomes easier.

Brief Description:

illustrates a blockchain formation 200 in accordance with one embodiment.

Detailed Description:

Figure 2 illustrates an exemplary blockchain formation 200. The mainchain 204 (M blocks) comprises the longest series of blocks from the start block 202 (S block) to the current block. Orphan blocks 206 (O blocks) exist outside of the main chain.

Blocks hold batches of valid transactions that are hashed and encoded, for example into a Merkle tree. Each block includes the cryptographic hash of the prior block in the blockchain formation 200, linking the two. The linked blocks form a chain. This iterative process confirms the integrity of the previous block, all the way back to the original start block 202
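
The linkage described above can be sketched in a few lines; the block layout and field names below are assumptions for illustration, and a real implementation would hash a block header containing a Merkle root rather than a raw JSON payload.

```python
# Sketch of linking blocks by the hash of the prior block, as described above.
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class Block:
    prev_hash: str
    transactions: list = field(default_factory=list)

    def hash(self) -> str:
        # Hash a canonical encoding of the block's contents.
        payload = json.dumps(
            {"prev_hash": self.prev_hash, "transactions": self.transactions},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Build a short chain: each block commits to the hash of its predecessor.
start = Block(prev_hash="0" * 64, transactions=["genesis"])
second = Block(prev_hash=start.hash(), transactions=["tx-a", "tx-b"])
third = Block(prev_hash=second.hash(), transactions=["tx-c"])

# Verifying integrity walks the chain back toward the start block.
assert third.prev_hash == second.hash() and second.prev_hash == start.hash()
print("chain verified back to the start block")
```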

Sometimes separate blocks can be produced concurrently, creating a temporary fork. In addition to a secure hash-based history, the blockchain formation 200 has a specified algorithm for scoring different versions of the history so that one with a higher value can be selected over others. Blocks not selected for inclusion in the mainchain 204 are called orphan blocks 206. Peers supporting the blockchain formation 200 have different versions of the history from time to time. They keep only the highest-scoring version of the blockchain formation 200 known to them. Whenever a peer receives a higher-scoring version (usually the old version with a single new block added) they extend or overwrite their local version of the blockchain formation 200 and retransmit the improvement to their peers. There is never an absolute guarantee that any particular entry will remain in the best version of the history forever. Because blockchains are typically built to add the score of new blocks onto old blocks and because there are incentives to work only on extending with new blocks rather than overwriting old blocks, the probability of an entry becoming superseded goes down exponentially as more blocks are built on top of it, eventually becoming very low. For example, in a blockchain using the proof-of-work system, the chain with the most cumulative proof-of-work is always considered the valid one by the network. There are a number of methods that can be used to demonstrate a sufficient level of computation. Within a blockchain the computation is carried out redundantly rather than in the traditional segregated and parallel manner.
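
The chain-selection rule described above (keep only the highest-scoring version and overwrite the local copy when a better one arrives) can be sketched as follows; the per-block "work" field and scoring function are illustrative assumptions, not the source's algorithm.

```python
# Sketch of keeping only the highest-scoring version of the chain.
def chain_score(chain):
    # e.g., proof-of-work: sum the work attributed to each block.
    return sum(block.get("work", 1) for block in chain)

def prefer(local_chain, received_chain):
    # A peer overwrites its local copy only when the received version scores higher.
    return received_chain if chain_score(received_chain) > chain_score(local_chain) else local_chain

local = [{"work": 1}, {"work": 1}]
received = [{"work": 1}, {"work": 1}, {"work": 1}]   # one new block appended
print(prefer(local, received) is received)           # True: higher cumulative work wins
```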

Brief Description:

illustrates a blockchain 300 in accordance with one embodiment.

Detailed Description:

Figure 3 illustrates an embodiment of an irreversible transaction blockchain 300. The blockchain 300 is a sequence of digitally signed transactions (transaction 1 302, transaction 2 304, transaction 3 306, etc.). Each transaction includes the current owner’s public key (block 1 owner public key 308, block 2 owner public key 310, and block 3 owner public key 312, respectively) and the previous owner’s signature (O(0) signature 314, O(1) signature 316, and O(2) signature 318), which are generated using a hash function. The owner of a transaction can examine each previous transaction to verify the chain of ownership. Unlike traditional check endorsements, the transactions in the blockchain 300 are irreversible, which mitigates fraud.
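
As a rough illustration of examining each previous transaction to verify the chain of ownership, the toy model below stands in a hash for the signing step (real systems use public-key digital signatures); the names and data layout are invented for illustration only.

```python
# Toy model of the ownership chain: each transaction's signature commits to the
# previous signature and the next owner's public key, so the chain can be rechecked.
import hashlib

def toy_sign(prev_signature: str, next_owner_pubkey: str) -> str:
    return hashlib.sha256((prev_signature + next_owner_pubkey).encode()).hexdigest()

def verify_chain(transactions):
    # Each entry: (owner_pubkey, signature). Recompute every link back to the start.
    for prev, cur in zip(transactions, transactions[1:]):
        if cur[1] != toy_sign(prev[1], cur[0]):
            return False
    return True

t1 = ("owner1-pubkey", "genesis-signature")
t2 = ("owner2-pubkey", toy_sign(t1[1], "owner2-pubkey"))
t3 = ("owner3-pubkey", toy_sign(t2[1], "owner3-pubkey"))
print(verify_chain([t1, t2, t3]))   # True: the chain of ownership is intact
```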


Parts List

100

blockchain transaction process

102

transaction requesting device

200

blockchain formation

202

start block

204

mainchain

206

orphan blocks

300

blockchain

302

transaction 1

304

transaction 2

306

transaction 3

308

block 1 owner public key

310

block 2 owner public key

312

block 3 owner public key

314

O(0) signature

316

O(1) signature

318

O(2) signature


Terms/Definitions

soft fork

a change of rules that creates blocks recognized as valid by the old software, i.e. it is backwards-compatible.

orphaned blocks

blocks in an abandoned fork.

block time

the average time it takes for the network to generate one extra block in the blockchain.

accidental fork

a branching in the chain that happens when two or more miners find a block at nearly the same time. The fork is resolved when subsequent block(s) are added and one of the chains becomes longer than the alternative(s). The network abandons the blocks that are not in the longest chain (they are called orphaned blocks).

hard fork

a rule change such that the software validating blocks according to the old rules will detect the blocks produced according to the new rules as invalid.

fork

what happens when a blockchain diverges into two potential paths forward.

smart contract

smart contracts are contracts that can be partially or fully executed or enforced without human interaction.[85] One of the main objectives of a smart contract is automated escrow. The IMF believes smart contracts based on blockchain technology could reduce moral hazards and optimize the use of contracts in general.[86] Due to the lack of widespread use, their legal status is unclear.[86] A smart contract is a blockchain implementation that enables the coding of contracts that execute when specified conditions are met. A blockchain smart contract is enabled by logic that defines and executes an agreement. For example, Ethereum Solidity is an open-source blockchain project that was built specifically to realize this possibility by implementing a Turing-complete programming language capability to implement smart contracts.

private blockchain

(i.e., permissioned blockchains) blockchains that use an access control layer to govern who has access to the network. In contrast to public blockchain networks, validators on private blockchain networks are vetted by the network owner. They do not rely on anonymous nodes to validate transactions nor do they benefit from the network effect. Private blockchains can also go by the name of ‘consortium’ or ‘hybrid’ blockchains.

Inter-domain Optimization Trigger in a PCE-based Environment


Drawings

Brief Description:

illustrates an item 100 in accordance with one embodiment.

Detailed Description:

Figure 1 is a schematic block diagram of an exemplary computer network 100 comprising areas 104-110 interconnected by area 102 (a “backbone” area). Area 102 shares area border routers (ABRs) with each area 104-110, namely, ABR1-2 with area 104, ABR3-4 with area 106, ABR5 with area 108, and ABR6 with area 110. In addition, areas 104-110 share their area border routers ABR1-6 with backbone area 102. Areas 104-110 have exemplary intradomain routers A-D, respectively, while area 110 also has intradomain router E. Also, within area 106 are exemplary nodes (e.g., routers) n1-n4. Those skilled in the art will understand that any number of routers and nodes may be used in the areas, and that the view shown herein is for simplicity. As used herein, an area is a collection of routers that share full network topology information with each other but not necessarily with routers outside the area. A collection of areas may be contained within a single autonomous system (AS). The term area as used herein also encompasses the term “level,” which has a similar meaning for networks that employ IS-IS as their interior gateway protocol (IGP), in which case the area border routers ABR1-6 are embodied as level 1/level 2 (L1L2) routers. These examples are merely representative. Areas and levels are generally referred to herein as “domains.” Also, the terms ABR, L1L2 router, and more generally, border routers, are used interchangeably herein.

Data packets may be exchanged among the areas 102-110 using predefined network communication protocols such as the Transmission Control Protocol/Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Asynchronous Transfer Mode (ATM) protocol, Frame Relay protocol, Internet Packet Exchange (IPX) protocol, etc. Routing information may be distributed among the routers of the areas using predetermined IGPs, such as conventional distance-vector protocols or, illustratively, link-state protocols, through the use of IGP Advertisements.

Brief Description:

illustrates an exemplary router 200 in accordance with one embodiment.

Detailed Description:

 

Figure 2 is a schematic block diagram of an exemplary router 200 that may be advantageously used with the present invention as an intradomain router or a border router. The router comprises a plurality of network interfaces 202, a processor 204, and a memory 208 interconnected by a system bus 206. The network interfaces 202 contain the mechanical, electrical and signaling circuitry for communicating data over physical links coupled to the network 100. The network interfaces may be configured to transmit and/or receive data using a variety of different communication protocols, including, inter alia, TCP/IP, UDP, ATM, synchronous optical networks (SONET), wireless protocols, Frame Relay, Ethernet, Fiber Distributed Data Interface (FDDI), etc.

 

The memory 208 comprises a plurality of storage locations that are addressable by the processor 204 and the network interfaces 202 for storing software programs and data structures associated with the present invention. The processor 204 may comprise necessary elements or logic adapted to execute the software programs and manipulate the data structures, such as routing table 218. A router operating system 220, portions of which are typically resident in memory 208 and executed by the processor, functionally organizes the router by, inter alia, invoking network operations in support of software processes and/or services executing on the router. These software processes and/or services include PCC/PCE process 210, routing services 212, Routing Information Base (RIB) 216, TE services 222, and RSVP services 214. It will be apparent to those skilled in the art that other processor and memory means, including various computer-readable media, may be used to store and execute program instructions pertaining to the inventive technique described herein.

 

Routing services 212 contain computer executable instructions executed by processor 204 to perform functions provided by one or more routing protocols, such as IGP, e.g., OSPF and IS-IS. These functions may be configured to manage a forwarding information database (not shown) containing, e.g., data used to make forwarding decisions. TE services 222 contain computer executable instructions for operating TE functions in accordance with the present invention. Examples of Traffic Engineering are described in RFC 3209, RFC 3784, and RFC 3630 as incorporated above, and in RFC 3473, entitled Generalized Multi-Protocol Label Switching (GMPLS) Signaling Resource ReSerVation Protocol-Traffic Engineering (RSVP-TE) Extensions, dated January 2003, which is hereby incorporated by reference in its entirety. RSVP services 214 contain computer executable instructions for implementing RSVP and processing RSVP messages in accordance with the present invention. RSVP is described in RFC 2205, entitled Resource ReSerVation Protocol (RSVP), and in RFC 3209, entitled RSVP-TE: Extensions to RSVP for LSP Tunnels, both as incorporated above.

 

Routing table 218 is illustratively resident in memory 208 and used to store routing information, including reachable destination address prefixes and associated attributes. These attributes include next-hop information used by exemplary router 200 to reach the destination prefixes and an associated metric (e.g., cost) of reaching the destination prefixes. The routing table 218 is illustratively maintained and managed by RIB 216. To that end, the RIB 216 maintains copies of routes (paths) provided by the routing protocols, such as IGP, in order to compute best paths/routes for installation into the routing table 218
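
The relationship between destination prefixes, next hops, and metrics described above can be pictured with a small sketch; the data layout and lowest-metric selection rule are assumptions for illustration, not the router operating system's actual RIB logic.

```python
# Sketch of a routing table keyed by destination prefix, holding next-hop and
# metric (cost) attributes, with the lowest-cost route installed per prefix.
routes = [
    {"prefix": "10.1.0.0/16", "next_hop": "ABR1", "metric": 20},
    {"prefix": "10.1.0.0/16", "next_hop": "ABR2", "metric": 10},
    {"prefix": "10.2.0.0/16", "next_hop": "ABR3", "metric": 5},
]

def best_paths(candidate_routes):
    table = {}
    for route in candidate_routes:
        best = table.get(route["prefix"])
        # Install the route with the lowest metric for each destination prefix.
        if best is None or route["metric"] < best["metric"]:
            table[route["prefix"]] = route
    return table

for prefix, route in best_paths(routes).items():
    print(prefix, "via", route["next_hop"], "cost", route["metric"])
```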

 

Changes in the network topology may be communicated among routers 200 using a link-state protocol, such as the conventional OSPF and IS-IS protocols. Suppose, for example, that a communication link fails within an AS or a cost value associated with a network node changes. Once the change in the network’s state is detected by one of the routers, that router may flood an IGP Advertisement communicating the change to the other routers in the AS. In this manner, each of the routers eventually “converges” to an identical view of the network topology.

Brief Description:

illustrates a sequence 300 in accordance with one embodiment.

Detailed Description:

Figure 3 is a flowchart illustrating a sequence of steps for triggering optimization in accordance with the present invention. Sequence 300 starts at step 302 and continues to step 304 where the event PCE (e.g., ABR3) detects the event, such as, e.g., through an IGP Advertisement (e.g., for the appearance of link n1-n2). In step 306, the event PCE sends the event notification to the other PCEs (ABR1-6, except ABR3) through the use of IGP Extension Object 400. Notably, the event PCE may first decide whether it is beneficial to send the request

At step 308, the other PCEs receive the event notification. If configured to do so, in step 310 the receiving PCE may determine whether any LSRs in its domain have requested a TE-LSP that would benefit from the event. If not, the sequence ends at step 328. Otherwise, the sequence continues to step 312, where the PCE (a source PCE) sends the event notification to the LSRs in its domain. An LSR receives the notification in step 314, and if it is not a head-end node at step 316, the LSR ignores the notification (aside from perhaps forwarding it on to other LSRs in its domain), and the sequence ends at step 328. If the LSR is a head-end node, it sends an optimization request (optionally jittered) to its local source PCE in step 318. Notably, the head-end node may first determine for which TE-LSPs, if any, to send an optimization request, as described above.

The source PCE receives the optimization request in step 320, and in step 322 it checks if the TE-LSP in the request would benefit from optimization based on the event domain. If there would be no benefit in step 324, the source PCE rejects the request, sends an error message to the requesting head-end node, and the sequence ends in step 328. However, if at step 326 it is determined that the TE-LSP could benefit from optimization, the source PCE processes the received optimization request. Once the request is processed, the sequence ends in step 328. 
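
The steps of Figure 3 can be summarized in a compact control-flow sketch; the data structures and predicates below are hypothetical stand-ins for the described checks, not the claimed implementation.

```python
# Condensed sketch of the sequence in Figure 3 using plain data and
# hypothetical predicates.
def trigger_optimization(event, pces):
    processed = []
    for pce in pces:                                        # steps 306-308: notification reaches other PCEs
        if not pce["has_interested_lsrs"]:                  # step 310: any LSRs that would benefit?
            continue
        for lsr in pce["lsrs"]:                             # step 312: notify LSRs in the domain
            if not lsr["is_head_end"]:                      # steps 314-316: non-head-end nodes ignore it
                continue
            request = {"lsr": lsr["name"], "event": event}  # step 318: head-end sends an optimization request
            if pce["te_lsp_benefits"](request):             # steps 320-326: source PCE checks for benefit
                processed.append(request)                   # the request is processed
            # otherwise the source PCE rejects the request and returns an error
    return processed

pces = [{
    "has_interested_lsrs": True,
    "te_lsp_benefits": lambda request: True,
    "lsrs": [
        {"name": "LSR1", "is_head_end": True},
        {"name": "LSR2", "is_head_end": False},
    ],
}]
print(trigger_optimization("link n1-n2 appeared", pces))
```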


Parts List

100

item

102

area

104

area

106

area

108

area

110

area

200

exemplary router

202

network interfaces

204

processor

206

system bus

208

memory

210

PCC/PCE process

212

routing services

214

RSVP services

216

Routing Information Base (RIB)

218

routing table

220

router operating system

222

TE services

300

sequence

302

step

304

step

306

step

308

step

310

step

312

step

314

step

316

step

318

step

320

step

322

step

324

step

326

step

328

step

330

step


Terms/Definitions

head end node

network’s

LSRs

RSVP

plurality

GMPLS

software programs and data structures

requesting head-end node

wireless protocols

RSVP messages

conventional OSPF

present invention

different communication protocols

destination prefixes

system bus

network topology

signaling circuitry

ABRs

program instructions

support

other PCEs

their interior gateway protocol

routing services

terms

data packets

areas

reachable destination address prefixes

network node changes

flowchart

TE functions

example

event notification

Generalized Multi-Protocol Label Switching

number

benefit

exemplary nodes

“backbone” area

backbone area A

copies

networks

information

network communication protocols

cost

received optimization request

link

error message

Traffic Engineering

term “level”

step

Internet Packet Exchange

IS-IS protocols

routes

head-end node

incorporated above

using

LSP Tunnels

area A1-A

intradomain router E

associated attributes

event domain

software processes and/or services

FDDI

event

identical view

share full network topology information

state

data structures

level

notification

its entirety

figure

Routing Information Base (RIB)

protocols

software programs

associated metric

functions

Fiber Distributed Data Interface

intradomain router

inventive technique

paths

their area border routers

shown

steps

its local source PCE

Resource ReSerVation Protocol-Traffic Engineering

forwarding decisions

L1L2 router

TE services

areas and levels

routing information

PCC/PCE process

Frame Relay protocol

border routers

table

reference

installation

data

request

event PCE

within

network

IGP Advertisements

other processor

other LSRs

exemplary computer network

Frame Relay

exemplary router

source PCE

physical links

routers A-D

changes

predetermined IGPs

RSVP services

instructions

term area

area A

variety

RSVP-TE

various computer-readable media

memory

communication link

computer

january

Area A0 shares

other routers

cost value

single autonomous system

SONET

conventional distance-vector protocols

router operating system

routers

2 (L1L2) routers

processor

examples

routers and nodes

area

storage locations

Resource ReSerVation Protocol

addition

manner

IGP Advertisement

schematic block diagram

forwarding information database

similar meaning

best paths/routes

simplicity

routing protocols

area border routers

User Datagram Protocol

collection

necessary elements or logic

view

inter alia

memory means

attributes

synchronous optical networks

Asynchronous Transfer Mode

network operations

link-state protocols

router

link-state protocol

portions

network interfaces

routing table

appearance

sequence

optimization request

border router

ethernet

exemplary

receiving PCE

change

extensions

IGP Extension Object

its domain

next-hop information

TE-LSPs

Transmission Control Protocol/Internet Protocol

optimization

Intelligent Automated Assistant in Messaging Environment


Drawings

Brief Description:

Figure 1 is a block diagram illustrating a system and environment for implementing a digital assistant according to various examples

Detailed Description:

System and environment 

Figure 1 illustrates a block diagram of system 100 according to various examples. In some examples, system 100 can implement a digital assistant. The terms “digital assistant,” “virtual assistant,” “intelligent automated assistant,” or “automatic digital assistant” can refer to any information processing system that interprets natural language input in spoken and/or textual form to infer user intent, and performs actions based on the inferred user intent. For example, to act on an inferred user intent, the system can perform one or more of the following: identifying a task flow with steps and parameters designed to accomplish the inferred user intent; inputting specific requirements from the inferred user intent into the task flow; executing the task flow by invoking programs, methods, services, APIs, or the like; and generating output responses to the user in an audible (e.g., speech) and/or visual form.
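
The intent-to-task-flow pattern described above can be pictured with a small dispatch table; the intent names, handlers, and parameters below are invented purely for illustration and are not part of the described system.

```python
# Sketch of the "identify a task flow, fill in its parameters, execute it" pattern.
def get_weather(location):
    return f"It is sunny in {location}."

def set_reminder(text, when):
    return f"Reminder set for {when}: {text}"

TASK_FLOWS = {
    "weather.lookup": (get_weather, ["location"]),
    "reminder.create": (set_reminder, ["text", "when"]),
}

def act_on_intent(intent, parameters):
    handler, required = TASK_FLOWS[intent]
    # Input specific requirements from the inferred intent into the task flow.
    args = {name: parameters[name] for name in required}
    # Execute the task flow and generate an output response for the user.
    return handler(**args)

print(act_on_intent("weather.lookup", {"location": "Central Park"}))
```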

Specifically, a digital assistant can be capable of accepting a user request at least partially in the form of a natural language command, request, statement, narrative, and/or inquiry. Typically, the user request can seek either an informational answer or performance of a task by the digital assistant. A satisfactory response to the user request can be a provision of the requested informational answer, a performance of the requested task, or a combination of the two. For example, a user can ask the digital assistant a question, such as “Where am I right now?” Based on the user’s current location, the digital assistant can answer, “You are in Central Park near the west gate.” The user can also request the performance of a task, for example, “Please invite my friends to my girlfriend’s birthday party next week.” In response, the digital assistant can acknowledge the request by saying “Yes, right away,” and then send a suitable calendar invite on behalf of the user to each of the user’s friends listed in the user’s electronic address book. During performance of a requested task, the digital assistant can sometimes interact with the user in a continuous dialogue involving multiple exchanges of information over an extended period of time. There are numerous other ways of interacting with a digital assistant to request information or performance of various tasks. In addition to providing verbal responses and taking programmed actions, the digital assistant can also provide responses in other visual or audio forms, e.g., as text, alerts, music, videos, animations, etc.

As shown in Figure 1, in some examples, a digital assistant can be implemented according to a client-server model. The digital assistant can include client-side portion 102 (hereafter “DA client 102”) executed on user device 104 and server-side portion 106 (hereafter “DA server 106”) executed on server system 108. DA client 102 can communicate with DA server 106 through one or more networks 110. DA client 102 can provide client-side functionalities such as user-facing input and output processing and communication with DA server 106. DA server 106 can provide server-side functionalities for any number of DA clients 102 each residing on a respective user device 104.

In some examples, DA server 106 can include client-facing I/O interface 112, one or more processing modules 114, data and models 116, and I/O interface to external services 118. The client-facing I/O interface 112 can facilitate the client-facing input and output processing for DA server 106. One or more processing modules 114 can utilize data and models 116 to process speech input and determine the user’s intent based on natural language input. Further, one or more processing modules 114 perform task execution based on inferred user intent. In some examples, DA server 106 can communicate with external services 120 through network(s) 110 for task completion or information acquisition. I/O interface to external services 118 can facilitate such communications
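
The following sketch is one possible, greatly simplified reading of this client-server split; the type names, the external service, and the request/response shapes are hypothetical and are not part of the described system.

```swift
import Foundation

// A rough sketch (hypothetical names) of the division of functionality:
// the client handles user-facing I/O and forwards the request over the network;
// the server runs its processing modules against data and models and, if needed,
// consults an external service.

struct UserRequest { let utterance: String }
struct AssistantResponse { let text: String }

protocol ExternalService {
    func lookup(_ query: String) -> String
}

struct DAServer {
    let externalServices: [ExternalService]
    // Processing modules would use data and models here; this stub simply
    // assembles an answer from an external lookup.
    func handle(_ request: UserRequest) -> AssistantResponse {
        let info = externalServices.first?.lookup(request.utterance) ?? "no result"
        return AssistantResponse(text: "Here is what I found: \(info)")
    }
}

struct DAClient {
    let server: DAServer
    // Client-side portion: accept input, forward it, render the response.
    func submit(_ spokenText: String) {
        let response = server.handle(UserRequest(utterance: spokenText))
        print(response.text)   // in practice: speech synthesis and/or display
    }
}

struct WeatherService: ExternalService {
    func lookup(_ query: String) -> String { "sunny, 22°C" }
}

let client = DAClient(server: DAServer(externalServices: [WeatherService()]))
client.submit("What's the weather near me?")
```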

User device 104 can be any suitable electronic device. For example, user device 104 can be a portable multifunctional device (e.g., device 202, described below with reference to Figure 2), a multifunctional device (e.g., device 400, described below with reference to FIG. 4), or a personal electronic device (e.g., device 600, described below with reference to FIGS. 6A-B). A portable multifunctional device can be, for example, a mobile telephone that also contains other functions, such as PDA and/or music player functions. Specific examples of portable multifunction devices can include the iPhone.RTM., iPod Touch.RTM., and iPad.RTM. devices from Apple Inc. of Cupertino, Calif. Other examples of portable multifunction devices can include, without limitation, laptop or tablet computers. Further, in some examples, user device 104 can be a non-portable multifunctional device. In particular, user device 104 can be a desktop computer, a game console, a television, or a television set-top box. In some examples, user device 104 can include a touch-sensitive surface (e.g., touch screen displays and/or touchpads). Further, user device 104 can optionally include one or more other physical user-interface devices, such as a physical keyboard, a mouse, and/or a joystick. Various examples of electronic devices, such as multifunctional devices, are described below in greater detail

Examples of communication network(s) 110 can include local area networks (LAN) and wide area networks (WAN), e.g., the internet. Communication network(s) 110 can be implemented using any known network protocol, including various wired or wireless protocols, such as, for example, Ethernet, Universal Serial Bus (USB), FIREWIRE, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wi-Fi, voice over Internet Protocol (VoIP), Wi-MAX, or any other suitable communication protocol

Server system 108 can be implemented on one or more standalone data processing apparatus or a distributed network of computers. In some examples, server system 108 can also employ various virtual devices and/or services of third-party service providers (e.g., third-party cloud service providers) to provide the underlying computing resources and/or infrastructure resources of server system 108

In some examples, user device 104 can communicate with DA server 106 via second user device 122. Second user device 122 can be similar or identical to user device 104. For example, second user device 122 can be similar to devices 202, 400, or 600 described below with reference to FIGS. 2A, 4, and 6A-B. User device 104 can be configured to communicatively couple to second user device 122 via a direct communication connection, such as Bluetooth, NFC, BTLE, or the like, or via a wired or wireless network, such as a local Wi-Fi network. In some examples, second user device 122 can be configured to act as a proxy between user device 104 and DA server 106. For example, DA client 102 of user device 104 can be configured to transmit information (e.g., a user request received at user device 104) to DA server 106 via second user device 122. DA server 106 can process the information and return relevant data (e.g., data content responsive to the user request) to user device 104 via second user device 122

In some examples, user device 104 can be configured to communicate abbreviated requests for data to second user device 122 to reduce the amount of information transmitted from user device 104. Second user device 122 can be configured to determine supplemental information to add to the abbreviated request to generate a complete request to transmit to DA server 106. This system architecture can advantageously allow user device 104 having limited communication capabilities and/or limited battery power (e.g., a watch or a similar compact electronic device) to access services provided by DA server 106 by using second user device 122, having greater communication capabilities and/or battery power (e.g., a mobile phone, laptop computer, tablet computer, or the like), as a proxy to DA server 106. While only two user devices 104 and 122 are shown in Figure 1, it should be appreciated that system 100 can include any number and type of user devices configured in this proxy configuration to communicate with DA server 106.
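
As a rough illustration of this proxy arrangement, the sketch below assumes a hypothetical abbreviated-request structure and shows how a second device might add supplemental information before forwarding a complete request; the field names are illustrative only.

```swift
import Foundation

// Illustrative sketch: a battery-constrained device sends an abbreviated
// request, and the second user device fills in supplemental context before
// forwarding the complete request to the server.

struct AbbreviatedRequest {
    let utterance: String            // the minimum the constrained device must send
}

struct CompleteRequest {
    let utterance: String
    let location: String             // supplemental information added by the proxy
    let deviceIdentifier: String
}

struct ProxyDevice {
    let currentLocation: String
    let pairedDeviceIdentifier: String

    func expand(_ request: AbbreviatedRequest) -> CompleteRequest {
        CompleteRequest(utterance: request.utterance,
                        location: currentLocation,
                        deviceIdentifier: pairedDeviceIdentifier)
    }
}

let phone = ProxyDevice(currentLocation: "Central Park, west gate",
                        pairedDeviceIdentifier: "watch-01")
let complete = phone.expand(AbbreviatedRequest(utterance: "Where am I?"))
print("Forwarding to DA server:", complete)
```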

Although the digital assistant shown in Figure 1 can include both a client-side portion (e.g., DA client 102) and a server-side portion (e.g., DA server 106), in some examples, the functions of a digital assistant can be implemented as a standalone application installed on a user device. In addition, the divisions of functionalities between the client and server portions of the digital assistant can vary in different implementations. For instance, in some examples, the DA client can be a thin-client that provides only user-facing input and output processing functions, and delegates all other functionalities of the digital assistant to a backend server

Brief Description:

Figure 2 is a block diagram illustrating a portable multifunction device implementing the client-side portion of a digital assistant according to various examples

Detailed Description:

2. Electronic devices 

Attention is now directed toward embodiments of electronic devices for implementing the client-side portion of a digital assistant. Figure 2 is a block diagram illustrating portable multifunction device 202 with touch-sensitive display system 278 in accordance with some embodiments. Touch-sensitive display 278 is sometimes called a “touch screen” for convenience and is sometimes known as or called a “touch-sensitive display system.” Device 202 includes memory 202 (which optionally includes one or more computer-readable storage mediums), memory controller 318, one or more processing units (CPUs) 320, peripherals interface 322, RF circuitry 300, audio circuitry 306, speaker 310, microphone 308, input/output (I/O) subsystem 298, other input control devices 282, and external port 314. Device 202 optionally includes one or more optical sensors 296. Device 202 optionally includes one or more contact intensity sensors 294 for detecting intensity of contacts on device 202 (e.g., a touch-sensitive surface such as touch-sensitive display system 278 of device 202). Device 202 optionally includes one or more tactile output generators 290 for generating tactile outputs on device 202 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 278 of device 202 or touchpad 455 of device 400). These components optionally communicate over one or more communication buses or signal lines 203.

As used in the specification and claims, the term “intensity” of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact) on the touch-sensitive surface, or to a substitute (proxy) for the force or pressure of a contact on the touch-sensitive surface. The intensity of a contact has a range of values that includes at least four distinct values and more typically includes hundreds of distinct values (e.g., at least 256). The intensity of a contact is, optionally, determined (or measured) using various approaches and various sensors or combinations of sensors. For example, one or more force sensors underneath or adjacent to the touch-sensitive surface are, optionally, used to measure force at various points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average) to determine an estimated force of a contact. Similarly, a pressure-sensitive tip of a stylus is, optionally, used to determine a pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereto, the capacitance of the touch-sensitive surface proximate to the contact and/or changes thereto, and/or the resistance of the touch-sensitive surface proximate to the contact and/or changes thereto are, optionally, used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the substitute measurements for contact force or pressure are used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the substitute measurements). In some implementations, the substitute measurements for contact force or pressure are converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). Using the intensity of a contact as an attribute of a user input allows for user access to additional device functionality that may otherwise not be accessible by the user on a reduced-size device with limited real estate for displaying affordances (e.g., on a touch-sensitive display) and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or a physical/mechanical control such as a knob or a button).
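
One minimal way to picture the weighted-average estimate and the threshold comparison is sketched below; the per-sensor readings, weights, and threshold value are hypothetical.

```swift
import Foundation

// A minimal sketch: combine readings from several force sensors into one
// estimated contact force via a weighted average, then compare it against a
// software-defined intensity threshold.

func estimatedContactForce(readings: [Double], weights: [Double]) -> Double {
    precondition(readings.count == weights.count && !readings.isEmpty)
    let weightedSum = zip(readings, weights).reduce(0) { $0 + $1.0 * $1.1 }
    return weightedSum / weights.reduce(0, +)   // weighted average across force sensors
}

let readings = [0.42, 0.55, 0.38]   // hypothetical force sensors near the contact point
let weights  = [0.5, 0.3, 0.2]      // closer sensors weighted more heavily (assumption)
let force = estimatedContactForce(readings: readings, weights: weights)

let deepPressThreshold = 0.45        // software-defined, adjustable without new hardware
print(force > deepPressThreshold ? "deep press" : "light press")
```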

As used in the specification and claims, the term “tactile output” refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component (e.g., a touch-sensitive surface) of a device relative to another component (e.g., housing) of the device, or displacement of the component relative to a center of mass of the device that will be detected by a user with the user’s sense of touch. For example, in situations where the device or the component of the device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other part of a user’s hand), the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in physical characteristics of the device or the component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is, optionally, interpreted by the user as a “down click” or “up click” of a physical actuator button. In some cases, a user will feel a tactile sensation such as a “down click” or “up click” even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user’s movements. As another example, movement of the touch-sensitive surface is, optionally, interpreted or sensed by the user as “roughness” of the touch-sensitive surface, even when there is no change in smoothness of the touch-sensitive surface. While such interpretations of touch by a user will be subject to the individualized sensory perceptions of the user, there are many sensory perceptions of touch that are common to a large majority of users. Thus, when a tactile output is described as corresponding to a particular sensory perception of a user (e.g., an “up click,” a “down click,” “roughness”), unless otherwise stated, the generated tactile output corresponds to physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user

It should be appreciated that device 202 is only one example of a portable multifunction device, and that device 202 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown in Figure 2 are implemented in hardware, software, or a combination of both hardware and software, including one or more signal processing and/or application-specific integrated circuits

Memory 202 may include one or more computer-readable storage mediums. The computer-readable storage mediums may be tangible and non-transitory. Memory 202 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Memory controller 318 may control access to memory 202 by other components of device 202. 

In some examples, a non-transitory computer-readable storage medium of memory 202 can be used to store instructions (e.g., for performing aspects of processes described below) for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In other examples, the instructions (e.g., for performing aspects of the processes described below) can be stored on a non-transitory computer-readable storage medium (not shown) of the server system 108 or can be divided between the non-transitory computer-readable storage medium of memory 202 and the non-transitory computer-readable storage medium of server system 108. In the context of this document, a “non-transitory computer-readable storage medium” can be any medium that can contain or store the program for use by or in connection with the instruction execution system, apparatus, or device

Peripherals interface 322 can be used to couple input and output peripherals of the device to CPU 320 and memory 202. The one or more processors 320 run or execute various software programs and/or sets of instructions stored in memory 202 to perform various functions for device 202 and to process data. In some embodiments, peripherals interface 322, CPU 320, and memory controller 318 may be implemented on a single chip, such as chip 316. In some other embodiments, they may be implemented on separate chips

RF (radio frequency) circuitry 300 receives and sends RF signals, also called electromagnetic signals. RF circuitry 300 converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals. RF circuitry 300 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a subscriber identity module (SIM) card, memory, and so forth. RF circuitry 300 optionally communicates with networks, such as the internet, also referred to as the World Wide Web (WWW), an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN), and other devices by wireless communication. The RF circuitry 300 optionally includes well-known circuitry for detecting near field communication (NFC) fields, such as by a short-range communication radio. The wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Bluetooth Low Energy (BTLE), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, and/or IEEE 802.11ac), voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e-mail (e.g., internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document

Audio circuitry 306, speaker 310, and microphone 308 provide an audio interface between a user and device 202. Audio circuitry 306 receives audio data from peripherals interface 322, converts the audio data to an electrical signal, and transmits the electrical signal to speaker 310. Speaker 310 converts the electrical signal to human-audible sound waves. Audio circuitry 306 also receives electrical signals converted by microphone 308 from sound waves. Audio circuitry 306 converts the electrical signal to audio data and transmits the audio data to peripherals interface 322 for processing. Audio data may be retrieved from and/or transmitted to memory 202 and/or RF circuitry 300 by peripherals interface 322. In some embodiments, audio circuitry 306 also includes a headset jack (e.g., 312, FIG. 3). The headset jack provides an interface between audio circuitry 306 and removable audio input/output peripherals, such as output-only headphones or a headset with both output (e.g., a headphone for one or both ears) and input (e.g., a microphone). 

I/O subsystem 298 couples input/output peripherals on device 202, such as touch screen 278 and other input control devices 282, to peripherals interface 322. I/O subsystem 298 optionally includes display controller 280, optical sensor controller 292, intensity sensor controller 286, haptic feedback controller 284, and one or more input controllers 288 for other input or control devices. The one or more input controllers 288 receive/send electrical signals from/to other input control devices 282. The other input control devices 282 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth. In some alternate embodiments, input controller(s) 288 are, optionally, coupled to any (or none) of the following: a keyboard, an infrared port, a USB port, and a pointer device such as a mouse. The one or more buttons (e.g., 308, FIG. 3) optionally include an up/down button for volume control of speaker 310 and/or microphone 308. The one or more buttons optionally include a push button (e.g., 306, FIG. 3).

A quick press of the push button may disengage a lock of touch screen 278 or begin a process that uses gestures on the touch screen to unlock the device, as described in U.S. patent application Ser. No. 11/322,549, “Unlocking a Device by Performing Gestures on an Unlock Image,” filed Dec. 23, 2005, U.S. Pat. No. 7,657,849, which is hereby incorporated by reference in its entirety. A longer press of the push button (e.g., 306) may turn power to device 202 on or off. The user may be able to customize a functionality of one or more of the buttons. Touch screen 278 is used to implement virtual or soft buttons and one or more soft keyboards

Touch-sensitive display 278 provides an input interface and an output interface between the device and a user. Display controller 280 receives and/or sends electrical signals from/to touch screen 278. Touch screen 278 displays visual output to the user. The visual output may include graphics, text, icons, video, and any combination thereof (collectively termed “graphics”). In some embodiments, some or all of the visual output may correspond to user-interface objects

Touch screen 278 has a touch-sensitive surface, sensor, or set of sensors that accepts input from the user based on haptic and/or tactile contact. Touch screen 278 and display controller 280 (along with any associated modules and/or sets of instructions in memory 202) detect contact (and any movement or breaking of the contact) on touch screen 278 and convert the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages, or images) that are displayed on touch screen 278. In an exemplary embodiment, a point of contact between touch screen 278 and the user corresponds to a finger of the user

Touch screen 278 may use LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies may be used in other embodiments. Touch screen 278 and display controller 280 may detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 278. In an exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone.RTM. and iPod Touch.RTM. from Apple Inc. of Cupertino, Calif.

A touch-sensitive display in some embodiments of touch screen 278 may be analogous to the multi-touch sensitive touchpads described in the following: U.S. Pat. No. 6,323,846 (Westerman et al.), U.S. Pat. No. 6,570,557 (Westerman et al.), and/or U.S. Pat. No. 6,677,932 (Westerman), and/or U.S. Patent Publication 2002/0015024A1, each of which is hereby incorporated by reference in its entirety. However, touch screen 278 displays visual output from device 202, whereas touch-sensitive touchpads do not provide visual output

A touch-sensitive display in some embodiments of touch screen 278 may be as described in the following applications: (1) U.S. patent application Ser. No. 11/381,313, “Multipoint Touch Surface Controller,” filed May 2, 2006; (2) U.S. patent application Ser. No. 10/840,862, “Multipoint Touchscreen,” filed May 6, 2004; (3) U.S. patent application Ser. No. 10/903,964, “Gestures For Touch Sensitive Input Devices,” filed Jul. 30, 2004; (4) U.S. patent application No. 11/048,264, “Gestures For Touch Sensitive Input Devices,” filed Jan. 31, 2005; (5) U.S. patent application Ser. No. 11/038,590, “Mode-Based Graphical User Interfaces For Touch Sensitive Input Devices,” filed Jan. 18, 2005; (6) U.S. patent application Ser. No. 11/228,758, “Virtual Input Device Placement On A Touch Screen User Interface,” filed Sep. 16, 2005; (7) U.S. patent application Ser. No. 11/228,700, “Operation Of A computer With A Touch Screen Interface,” filed Sep. 16, 2005; (8) U.S. patent application Ser. No. 11/228,737, “Activating Virtual Keys Of A Touch-Screen Virtual Keyboard,” filed Sep. 16, 2005; and (9) U.S. patent application Ser. No. 11/367,749, “Multi-Functional Hand-Held Device,” filed Mar. 3, 2006. All of these applications are incorporated by reference herein in their entirety

Touch screen 278 may have a video resolution in excess of 100 dpi. In some embodiments, the touch screen has a video resolution of approximately 160 dpi. The user may make contact with touch screen 278 using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work primarily with finger-based contacts and gestures, which can be less precise than stylus-based input due to the larger area of contact of a finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user
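
A simple way to picture this translation is sketched below, under the assumption (hypothetical) that the finger contact is reported as a set of activated sensor points whose centroid is taken as the precise cursor position.

```swift
import Foundation

// A minimal sketch: reduce a rough finger contact area (a set of sensor points)
// to a single precise cursor position by taking its centroid.

struct SensorPoint { let x: Double; let y: Double }

func cursorPosition(for contactArea: [SensorPoint]) -> (x: Double, y: Double)? {
    guard !contactArea.isEmpty else { return nil }
    let n = Double(contactArea.count)
    let sumX = contactArea.reduce(0) { $0 + $1.x }
    let sumY = contactArea.reduce(0) { $0 + $1.y }
    return (sumX / n, sumY / n)
}

let contact = [SensorPoint(x: 118, y: 240), SensorPoint(x: 124, y: 243),
               SensorPoint(x: 121, y: 238)]
print(cursorPosition(for: contact)!)   // single point used for the intended command
```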

In some embodiments, in addition to the touch screen, device 202 may include a touchpad (not shown) for activating or deactivating particular functions. In some embodiments, the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output. The touchpad may be a touch-sensitive surface that is separate from touch screen 278 or an extension of the touch-sensitive surface formed by the touch screen

Device 202 also includes power system 312 for powering the various components. Power system 312 may include a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)) and any other components associated with the generation, management and distribution of power in portable devices

Device 202 may also include one or more optical sensors 296. Figure 2 shows an optical sensor coupled to optical sensor controller 292 in I/O subsystem 298. Optical sensor 296 may include charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) phototransistors. Optical sensor 296 receives light from the environment, projected through one or more lenses, and converts the light to data representing an image. In conjunction with imaging module 236 (also called a camera module), optical sensor 296 may capture still images or video. In some embodiments, an optical sensor is located on the back of device 202, opposite touch screen display 278 on the front of the device so that the touch screen display may be used as a viewfinder for still and/or video image acquisition. In some embodiments, an optical sensor is located on the front of the device so that the user’s image may be obtained for video conferencing while the user views the other video conference participants on the touch screen display. In some embodiments, the position of optical sensor 296 can be changed by the user (e.g., by rotating the lens and the sensor in the device housing) so that a single optical sensor 296 may be used along with the touch screen display for both video conferencing and still and/or video image acquisition

Device 202 optionally also includes one or more contact intensity sensors 294. Figure 2 shows a contact intensity sensor coupled to intensity sensor controller 286 in I/O subsystem 298. Contact intensity sensor 294 optionally includes one or more piezoresistive strain gauges, capacitive force sensors, electric force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors used to measure the force (or pressure) of a contact on a touch-sensitive surface). Contact intensity sensor 294 receives contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. In some embodiments, at least one contact intensity sensor is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 278). In some embodiments, at least one contact intensity sensor is located on the back of device 202, opposite touch screen display 278, which is located on the front of device 202.

Device 202 may also include one or more proximity sensors 304. Figure 2 shows proximity sensor 304 coupled to peripherals interface 322. Alternately, proximity sensor 304 may be coupled to input controller 288 in I/O subsystem 298. Proximity sensor 304 may perform as described in U.S. patent application Ser. No. 11/241,839, “Proximity Detector In Handheld Device”; 11/240,788, “Proximity Detector In Handheld Device”; 11/620,702, “Using Ambient Light Sensor To Augment Proximity Sensor Output”; 11/586,862, “Automated Response To And Sensing Of User Activity In Portable Devices”; and 11/638,251, “Methods And Systems For Automatic Configuration Of Peripherals,” which are hereby incorporated by reference in their entirety. In some embodiments, the proximity sensor turns off and disables touch screen 278 when the multifunction device is placed near the user’s ear (e.g., when the user is making a phone call).

Device 202 optionally also includes one or more tactile output generators 290. Figure 2 shows a tactile output generator coupled to haptic feedback controller 284 in I/O subsystem 298. Tactile output generator 290 optionally includes one or more electroacoustic devices such as speakers or other audio components and/or electromechanical devices that convert energy into linear motion such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component that converts electrical signals into tactile outputs on the device). Tactile output generator 290 receives tactile feedback generation instructions from haptic feedback module 260 and generates tactile outputs on device 202 that are capable of being sensed by a user of device 202. In some embodiments, at least one tactile output generator is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 278) and, optionally, generates a tactile output by moving the touch-sensitive surface vertically (e.g., in/out of a surface of device 202) or laterally (e.g., back and forth in the same plane as a surface of device 202). In some embodiments, at least one tactile output generator sensor is located on the back of device 202, opposite touch screen display 278, which is located on the front of device 202.

Device 202 may also include one or more accelerometers 302. Figure 2 shows accelerometer 302 coupled to peripherals interface 322. Alternately, accelerometer 302 may be coupled to an input controller 288 in I/O subsystem 298. Accelerometer 302 may perform as described in U.S. Patent Publication No. 20050190059, “Acceleration-based Theft Detection System for Portable Electronic Devices,” and U.S. Patent Publication No. 20060017692, “Methods And Apparatuses For Operating A Portable Device Based On An Accelerometer,” both of which are incorporated by reference herein in their entirety. In some embodiments, information is displayed on the touch screen display in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers. Device 202 optionally includes, in addition to accelerometer(s) 302, a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) for obtaining information concerning the location and orientation (e.g., portrait or landscape) of device 202.

In some embodiments, the software components stored in memory 202 include operating system 208, communication module (or set of instructions) 276, contact/motion module (or set of instructions) 270, graphics module (or set of instructions) 266, text input module (or set of instructions) 258, Global Positioning System (GPS) module (or set of instructions) 252, Digital Assistant Client Module 248, and applications (or sets of instructions) 226. Further, memory 202 can store data and models, such as user data and models 244. Furthermore, in some embodiments, memory 202 (Figure 2) or 470 (FIG. 4) stores device/global internal state 232, as shown in FIGS. 2A and 4. Device/global internal state 232 includes one or more of: active application state, indicating which applications, if any, are currently active; display state, indicating what applications, views or other information occupy various regions of touch screen display 278; sensor state, including information obtained from the device’s various sensors and input control devices 282; and location information concerning the device’s location and/or attitude
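
The sketch below is a schematic, hypothetical rendering of the kinds of fields device/global internal state 232 is described as holding; the structure and field names are illustrative, not the actual data layout.

```swift
import Foundation

// A schematic sketch of the described categories of internal state: active
// applications, display regions, recent sensor readings, and location/attitude.

struct DeviceGlobalInternalState {
    var activeApplications: [String]        // which applications are currently active
    var displayRegions: [String: String]    // region of the touch screen -> occupying view
    var sensorState: [String: Double]       // latest values from sensors and input devices
    var location: (latitude: Double, longitude: Double)?
    var attitude: String                    // e.g. "portrait" or "landscape"
}

var internalState = DeviceGlobalInternalState(
    activeApplications: ["Messages"],
    displayRegions: ["top": "status bar", "main": "conversation view"],
    sensorState: ["accelerometer.z": -0.98, "ambientLight": 310],
    location: (40.7712, -73.9742),
    attitude: "portrait")

internalState.activeApplications.append("Camera")
print(internalState.activeApplications)
```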

Operating system 208 (e.g., Darwin, RTXC, LINUX, UNIX, OS X, iOS, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components

Communication module 276 facilitates communication with other devices over one or more external ports 314 and also includes various software components for handling data received by RF circuitry 300 and/or external port 314. External port 314 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on iPod.RTM. (trademark of Apple Inc.) Devices

Contact/motion module 270 optionally detects contact with touch screen 278 (in conjunction with display controller 280) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). Contact/motion module 270 includes various software components for performing various operations related to detection of contact, such as determining if contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact or a substitute for the force or pressure of the contact), determining if there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining if the contact has ceased (e.g., detecting a finger-up event or a break in contact). Contact/motion module 270 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact. These operations are, optionally, applied to single contacts (e.g., one finger contacts) or to multiple simultaneous contacts (e.g., “multitouch”/multiple finger contacts). In some embodiments, contact/motion module 270 and display controller 280 detect contact on a touchpad
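
As a simplified illustration of the movement tracking described above, the sketch below assumes a series of timestamped contact samples (a hypothetical structure) and estimates the speed and velocity of the point of contact from successive samples.

```swift
import Foundation

// A simplified sketch: estimate velocity (magnitude and direction) of the
// point of contact from two successive contact samples.

struct ContactSample {
    let x: Double, y: Double   // position on the touch-sensitive surface (points)
    let t: Double              // timestamp (seconds)
}

func velocity(from a: ContactSample, to b: ContactSample) -> (dx: Double, dy: Double, speed: Double) {
    let dt = max(b.t - a.t, 1e-6)            // guard against a zero time step
    let vx = (b.x - a.x) / dt
    let vy = (b.y - a.y) / dt
    return (vx, vy, (vx * vx + vy * vy).squareRoot())
}

let samples = [ContactSample(x: 100, y: 200, t: 0.000),
               ContactSample(x: 130, y: 205, t: 0.016)]
let v = velocity(from: samples[0], to: samples[1])
print("speed:", v.speed, "points/second")    // magnitude; direction given by (dx, dy)
```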

In some embodiments, contact/motion module 270 uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether a user has “clicked” on an icon). In some embodiments, at least a subset of the intensity thresholds are determined in accordance with software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and can be adjusted without changing the physical hardware of device 202). For example, a mouse “click” threshold of a trackpad or touch screen display can be set to any of a large range of predefined threshold values without changing the trackpad or touch screen display hardware. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more of the set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting a plurality of intensity thresholds at once with a system-level click “intensity” parameter). 

Contact/motion module 270 optionally detects a gesture input by a user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, a gesture is, optionally, detected by detecting a particular contact pattern. For example, detecting a finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (liftoff) event at the same position (or substantially the same position) as the finger-down event (e.g., at the position of an icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (liftoff) event
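
The sketch below illustrates the contact-pattern idea with hypothetical events and thresholds: a tap is a finger-down followed by a finger-up near the same position, while a swipe includes intervening finger-dragging events. It is a toy classifier, not the module’s actual recognition logic.

```swift
import Foundation

// An illustrative sketch of classifying a gesture from its contact pattern.

enum TouchEvent {
    case fingerDown(x: Double, y: Double)
    case fingerDrag(x: Double, y: Double)
    case fingerUp(x: Double, y: Double)
}

func classifyGesture(_ events: [TouchEvent], tapRadius: Double = 10) -> String {
    guard case let .fingerDown(x0, y0)? = events.first,
          case let .fingerUp(x1, y1)? = events.last else { return "incomplete gesture" }
    let dragged = events.contains { event in
        if case .fingerDrag = event { return true }
        return false
    }
    let distance = ((x1 - x0) * (x1 - x0) + (y1 - y0) * (y1 - y0)).squareRoot()
    return (dragged || distance > tapRadius) ? "swipe" : "tap"
}

print(classifyGesture([.fingerDown(x: 50, y: 50), .fingerUp(x: 52, y: 51)]))    // "tap"
print(classifyGesture([.fingerDown(x: 50, y: 50), .fingerDrag(x: 120, y: 55),
                       .fingerUp(x: 200, y: 60)]))                              // "swipe"
```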

Graphics module 266 includes various known software components for rendering and displaying graphics on touch screen 278 or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual property) of graphics that are displayed. As used herein, the term “graphics” includes any object that can be displayed to a user, including, without limitation, text, web pages, icons (such as user-interface objects including soft keys), digital images, videos, animations, and the like.

In some embodiments, graphics module 266 stores data representing graphics to be used. Each graphic is, optionally, assigned a corresponding code. Graphics module 266 receives, from applications etc., one or more codes specifying graphics to be displayed along with, if necessary, coordinate data and other graphic property data, and then generates screen image data to output to display controller 280.

Haptic feedback module 260 includes various software components for generating instructions used by tactile output generator(s) 290 to produce tactile outputs at one or more locations on device 202 in response to user interactions with device 202. 

Text input module 258, which may be a component of graphics module 266, provides soft keyboards for entering text in various applications (e.g., contacts 240, e-mail 212, IM 241, browser 274, and any other application that needs text input).

GPS module 252 determines the location of the device and provides this information for use in various applications (e.g., to telephone 243 for use in location-based dialing; to camera 236 as picture/video metadata; and to applications that provide location-based services such as weather widgets, local yellow page widgets, and map/navigation widgets).

Digital Assistant Client Module 248 can include various client-side digital assistant instructions to provide the client-side functionalities of the digital assistant. For example, Digital Assistant Client Module 248 can be capable of accepting voice input (e.g., speech input), text input, touch input, and/or gestural input through various user interfaces (e.g., microphone 308, accelerometer(s) 302, touch-sensitive display system 278, optical sensor(s) 296, other input control devices 282, etc.) of portable multifunction device 202. Digital Assistant Client Module 248 can also be capable of providing output in audio (e.g., speech output), visual, and/or tactile forms through various output interfaces (e.g., speaker 310, touch-sensitive display system 278, tactile output generator(s) 290, etc.) of portable multifunction device 202. For example, output can be provided as voice, sound, alerts, text messages, menus, graphics, videos, animations, vibrations, and/or combinations of two or more of the above. During operation, Digital Assistant Client Module 248 can communicate with DA server 106 using RF circuitry 300

User data and models 244 can include various data associated with the user (e.g., user-specific vocabulary data, user preference data, user-specified name pronunciations, data from the user’s electronic address book, to-do lists, shopping lists, etc.) to provide the client-side functionalities of the digital assistant. Further, user data and models 244 can include various models (e.g., speech recognition models, statistical language models, natural language processing models, ontology, task flow models, service models, etc.) for processing user input and determining user intent

In some examples, Digital Assistant Client Module 248 can utilize the various sensors, subsystems, and peripheral devices of portable multifunction device 202 to gather additional information from the surrounding environment of the portable multifunction device 202 to establish a context associated with a user, the current user interaction, and/or the current user input. In some examples, Digital Assistant Client Module 248 can provide the contextual information or a subset thereof with the user input to DA server 106 to help infer the user’s intent. In some examples, the digital assistant can also use the contextual information to determine how to prepare and deliver outputs to the user. Contextual information can be referred to as context data

In some examples, the contextual information that accompanies the user input can include sensor information, e.g., lighting, ambient noise, ambient temperature, images or videos of the surrounding environment, etc. In some examples, the contextual information can also include the physical state of the device, e.g., device orientation, device location, device temperature, power level, speed, acceleration, motion patterns, cellular signal strength, etc. In some examples, information related to the software state of DA server 106, e.g., running processes, installed programs, past and present network activities, background services, error logs, resources usage, etc., and of portable multifunction device 202 can be provided to DA server 106 as contextual information associated with a user input
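
The sketch below is a rough, hypothetical shape for such context data accompanying a user input; the specific fields and their encoding are illustrative assumptions.

```swift
import Foundation

// A rough sketch of context data: sensor readings, physical device state, and
// a summary of software state, serialized alongside the user input.

struct ContextData: Codable {
    // Sensor information
    var ambientLightLux: Double
    var ambientNoiseDb: Double
    // Physical state of the device
    var orientation: String
    var batteryLevel: Double
    var cellularSignalStrength: Int
    // Software state
    var runningProcesses: [String]
    var foregroundApp: String
}

let context = ContextData(ambientLightLux: 320,
                          ambientNoiseDb: 42,
                          orientation: "portrait",
                          batteryLevel: 0.63,
                          cellularSignalStrength: 3,
                          runningProcesses: ["Messages", "Music"],
                          foregroundApp: "Messages")

// The client could attach this payload when contacting the server.
let payload = try! JSONEncoder().encode(context)
print(String(data: payload, encoding: .utf8)!)
```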

In some examples, the Digital Assistant Client Module 248 can selectively provide information (e.g., user data 244) stored on the portable multifunction device 202 in response to requests from DA server 106. In some examples, Digital Assistant Client Module 248 can also elicit additional input from the user via a natural language dialogue or other user interfaces upon request by DA server 106. Digital Assistant Client Module 248 can pass the additional input to DA server 106 to help DA server 106 in intent deduction and/or fulfillment of the user’s intent expressed in the user request

A more detailed description of a digital assistant is described below with reference to FIGS. 7A-C. It should be recognized that Digital Assistant Client Module 248 can include any number of the sub-modules of digital assistant module 726 described below. 

Applications 226 may include the following modules (or sets of instructions), or a subset or superset thereof: contacts module 240 (sometimes called an address book or contact list); telephone module 243; video conference module 228; e-mail client module 212; instant messaging (IM) module 224; workout support module 242; camera module 236 for still and/or video images; image management module 230; video player module; music player module; browser module 274; calendar module 268; widget modules 262, which may include one or more of: weather widget 256, stocks widget 254, calculator widget 250, alarm clock widget 246, dictionary widget 238, and other widgets obtained by the user, as well as user-created widgets 220; widget creator module 210 for making user-created widgets 220; search module 272; video and music player module 218, which merges video player module and music player module; notes module 222; map module 214; and/or online video module 264

Examples of other applications 226 that may be stored in memory 206 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication

In conjunction with touch screen 278, display controller 280, contact/motion module 270, graphics module 266, and text input module 258, contacts module 240 may be used to manage an address book or contact list (e.g., stored in application internal state 332 of contacts module 240 in memory 206 or memory 470), including: adding name(s) to the address book; deleting name(s) from the address book; associating telephone number(s), e-mail address(es), physical address(es) or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers or e-mail addresses to initiate and/or facilitate communications by telephone 243, video conference module 228, e-mail 212, or IM 224; and so forth.

In conjunction with RF circuitry 300, audio circuitry 306, speaker 310, microphone 308, touch screen 278, display controller 280, contact/motion module 270, graphics module 266, and text input module 258, telephone module 243 may be used to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in contacts module 240, modify a telephone number that has been entered, dial a respective telephone number, conduct a conversation, and disconnect or hang up when the conversation is completed. As noted above, the wireless communication may use any of a plurality of communications standards, protocols, and technologies

In conjunction with RF circuitry 300, audio circuitry 306, speaker 310, microphone 308, touch screen 278, display controller 280, optical sensor 296, optical sensor controller 292, contact/motion module 270, graphics module 266, text input module 258, contacts module 240, and telephone module 243, video conference module 228 includes executable instructions to initiate, conduct, and terminate a video conference between a user and one or more other participants in accordance with user instructions

In conjunction with RF circuitry 300, touch screen 278, display controller 280, contact/motion module 270, graphics module 266, and text input module 258, e-mail client module 212 includes executable instructions to create, send, receive, and manage e-mail in response to user instructions. In conjunction with image management module 230, e-mail client module 212 makes it very easy to create and send e-mails with still or video images taken with camera module 236

In conjunction with RF circuitry 300, touch screen 278, display controller 280, contact/motion module 270, graphics module 266, and text input module 258, the instant messaging module 224 includes executable instructions to enter a sequence of characters corresponding to an instant message, to modify previously entered characters, to transmit a respective instant message (for example, using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephony-based instant messages or using XMPP, SIMPLE, or IMPS for internet-based instant messages), to receive instant messages, and to view received instant messages. In some embodiments, transmitted and/or received instant messages may include graphics, photos, audio files, video files and/or other attachments as are supported in an MMS and/or an Enhanced Messaging Service (EMS). As used herein, “instant messaging” refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS). 

In conjunction with RF circuitry 300, touch screen 278, display controller 280, contact/motion module 270, graphics module 266, text input module 258, GPS module 252, map module 214, and music player module, workout support module 242 includes executable instructions to create workouts (e.g., with time, distance, and/or calorie burning goals); communicate with workout sensors (sports devices); receive workout sensor data; calibrate sensors used to monitor a workout; select and play music for a workout; and display, store, and transmit workout data

In conjunction with touch screen 278, display controller 280, optical sensor(s) 296, optical sensor controller 292, contact/motion module 270, graphics module 266, and image management module 230, camera module 236 includes executable instructions to capture still images or video (including a video stream) and store them into memory 206, modify characteristics of a still image or video, or delete a still image or video from memory 206

In conjunction with touch screen 278, display controller 280, contact/motion module 270, graphics module 266, text input module 258, and camera module 236, image management module 230 includes executable instructions to arrange, modify (e.g., edit), or otherwise manipulate, label, delete, present (e.g., in a digital slide show or album), and store still and/or video images

In conjunction with RF circuitry 300, touch screen 278, display controller 280, contact/motion module 270, graphics module 266, and text input module 258, browser module 274 includes executable instructions to browse the internet in accordance with user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages

In conjunction with RF circuitry 300, touch screen 278, display controller 280, contact/motion module 270, graphics module 266, text input module 258, e-mail client module 212, and browser module 274, calendar module 268 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do lists, etc.) in accordance with user instructions

In conjunction with RF circuitry 300, touch screen 278, display controller 280, contact/motion module 270, graphics module 266, text input module 258, and browser module 274, widget modules 262 are mini-applications that may be downloaded and used by a user (e.g., weather widget 256, stocks widget 254, calculator widget 250, alarm clock widget 246, and dictionary widget 238) or created by the user (e.g., user-created widget 220). In some embodiments, a widget includes an HTML (Hypertext Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file. In some embodiments, a widget includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! widgets).

In conjunction with RF circuitry 300, touch screen 278, display controller 280, contact/motion module 270, graphics module 266, text input module 258, and browser module 274, the widget creator module 210 may be used by a user to create widgets (e.g., turning a user-specified portion of a web page into a widget). 

In conjunction with touch screen 278, display controller 280, contact/motion module 270, graphics module 266, and text input module 258, search module 272 includes executable instructions to search for text, music, sound, image, video, and/or other files in memory 206 that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions

In conjunction with touch screen 278, display controller 280, contact/motion module 270, graphics module 266, audio circuitry 306, speaker 310, RF circuitry 300, and browser module 274, video and music player module 218 includes executable instructions that allow the user to download and playback recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, and executable instructions to display, present, or otherwise play back videos (e.g., on touch screen 278 or on an external, connected display via external port 314). In some embodiments, device 202 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.).

In conjunction with touch screen 278, display controller 280, contact/motion module 270, graphics module 266, and text input module 258, notes module 222 includes executable instructions to create and manage notes, to-do lists, and the like in accordance with user instructions

In conjunction with RF circuitry 300, touch screen 278, display controller 280, contact/motion module 270, graphics module 266, text input module 258, GPS module 252, and browser module 274, map module 214 may be used to receive, display, modify, and store maps and data associated with maps (e.g., driving directions, data on stores and other points of interest at or near a particular location, and other location-based data) in accordance with user instructions

In conjunction with touch screen 278, display controller 280, contact/motion module 270, graphics module 266, audio circuitry 306, speaker 310, RF circuitry 300, text input module 258, e-mail client module 212, and browser module 274, online video module 264 includes instructions that allow the user to access, browse, receive (e.g., by streaming and/or download), play back (e.g., on the touch screen or on an external, connected display via external port 314), send an e-mail with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, instant messaging module 224, rather than e-mail client module 212, is used to send a link to a particular online video. Additional description of the online video application can be found in U.S. Provisional Patent Application No. 60/936,562, “Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos,” filed Jun. 20, 2007, and U.S. patent application Ser. No. 11/968,067, “Portable Multifunction Device, Method, and Graphical User Interface for Playing Online Videos,” filed Dec. 31, 2007, the contents of which are hereby incorporated by reference in their entirety

Each of the above-identified modules and applications corresponds to a set of executable instructions for performing one or more functions described above and the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (e.g., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise rearranged in various embodiments. For example, video player module may be combined with music player module into a single module (e.g., video and music player module 218, Figure 2). In some embodiments, memory 206 may store a subset of the modules and data structures identified above. Furthermore, memory 206 may store additional modules and data structures not described above

In some embodiments, device 202 is a device where operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a touchpad. By using a touch screen and/or a touchpad as the primary input control device for operation of device 202, the number of physical input control devices (such as push buttons, dials, and the like) on device 202 may be reduced. 

The predefined set of functions that are performed exclusively through a touch screen and/or a touchpad optionally include navigation between user interfaces. In some embodiments, the touchpad, when touched by the user, navigates device 202 to a main, home, or root menu from any user interface that is displayed on device 202. In such embodiments, a “menu button” is implemented using a touchpad. In some other embodiments, the menu button is a physical push button or other physical input control device instead of a touchpad

Brief Description:

Figure 3 is a block diagram illustrating exemplary components for event handling according to various examples

Detailed Description:

Figure 3 is a block diagram illustrating exemplary components for event handling in accordance with some embodiments. In some embodiments, memory 206 (Figure 2) or 470 (FIG. 4) includes event sorter 338 (e.g., in operating system 208) and a respective application 226

Event sorter 338 receives event information and determines the application 226 and application view 302 of application 226 to which to deliver the event information. Event sorter 338 includes event monitor 308 and event dispatcher module 322. In some embodiments, application 226 includes application internal state 332, which indicates the current application view(s) displayed on touch-sensitive display 278 when the application is active or executing. In some embodiments, device/global internal state 232 is used by event sorter 338 to determine which application(s) is (are) currently active, and application internal state 332 is used by event sorter 338 to determine application views 302 to which to deliver event information

In some embodiments, application internal state 332 includes additional information, such as one or more of: resume information to be used when application 226 resumes execution, user interface state information that indicates information being displayed or that is ready for display by application 226, a state queue for enabling the user to go back to a prior state or view of application 226, and a redo/undo queue of previous actions taken by the user
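
A minimal sketch of the kind of bookkeeping such an application internal state might carry is shown below; the type and field names (ApplicationInternalState, resumeInfo, stateQueue, undoStack, redoStack) are hypothetical and chosen only for illustration.

```swift
// Sketch of the bookkeeping an application internal state might carry.
// Field names are illustrative, not taken from the description above.

struct UserAction {
    let description: String
}

struct ApplicationInternalState {
    var resumeInfo: [String: String] = [:]      // used when the application resumes execution
    var displayedInfo: [String] = []            // what is displayed or ready for display
    var stateQueue: [String] = []               // prior states/views the user can go back to
    var undoStack: [UserAction] = []            // previous actions that can be undone
    var redoStack: [UserAction] = []            // undone actions that can be redone

    mutating func perform(_ action: UserAction) {
        undoStack.append(action)
        redoStack.removeAll()                   // a new action invalidates the redo queue
    }

    mutating func undo() -> UserAction? {
        guard let last = undoStack.popLast() else { return nil }
        redoStack.append(last)
        return last
    }

    mutating func redo() -> UserAction? {
        guard let last = redoStack.popLast() else { return nil }
        undoStack.append(last)
        return last
    }
}
```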

Event monitor 308 receives event information from peripherals interface 322. Event information includes information about a sub-event (e.g., a user touch on touch-sensitive display 278, as part of a multi-touch gesture). Peripherals interface 322 transmits information it receives from I/O subsystem 298 or a sensor, such as proximity sensor 304, accelerometer(s) 302, and/or microphone 308 (through audio circuitry 306). Information that peripherals interface 322 receives from I/O subsystem 298 includes information from touch-sensitive display 278 or a touch-sensitive surface

In some embodiments, event monitor 308 sends requests to the peripherals interface 322 at predetermined intervals. In response, peripherals interface 322 transmits event information. In other embodiments, peripherals interface 322 transmits event information only when there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or for more than a predetermined duration). 
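
The following sketch illustrates, under simplified assumptions, the two delivery styles described above: interval polling of a peripherals source, and forwarding only inputs that exceed a noise threshold and a minimum duration. The names (RawInput, EventMonitor, readPending) are hypothetical stand-ins.

```swift
// Sketch of the two delivery styles: the monitor polls a peripherals source, or the
// source pushes inputs and only "significant" ones are kept. Names are illustrative.

struct RawInput {
    let amplitude: Double       // stand-in for signal strength relative to noise
    let duration: Double        // seconds the input has persisted
}

final class EventMonitor {
    let readPending: () -> [RawInput]   // stand-in for a request to the peripherals layer
    let noiseThreshold: Double
    let minimumDuration: Double

    init(readPending: @escaping () -> [RawInput],
         noiseThreshold: Double = 0.1,
         minimumDuration: Double = 0.05) {
        self.readPending = readPending
        self.noiseThreshold = noiseThreshold
        self.minimumDuration = minimumDuration
    }

    /// Polling style: ask the peripherals layer for everything it has, at a fixed cadence.
    func poll() -> [RawInput] {
        readPending()
    }

    /// Push style: keep only inputs above the noise threshold that last long enough.
    func filterSignificant(_ input: RawInput) -> RawInput? {
        guard input.amplitude > noiseThreshold, input.duration > minimumDuration else {
            return nil
        }
        return input
    }
}

// Usage: a monitor over a source that currently reports one pending input.
let monitor = EventMonitor(readPending: { [RawInput(amplitude: 0.8, duration: 0.2)] })
print(monitor.poll().count)                                                        // 1
print(monitor.filterSignificant(RawInput(amplitude: 0.02, duration: 0.2)) != nil)  // false
```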

In some embodiments, event sorter 338 also includes a hit view determination module 310 and/or an active event recognizer determination module 316

Hit view determination module 310 provides software procedures for determining where a sub-event has taken place within one or more views when touch-sensitive display 278 displays more than one view. Views are made up of controls and other elements that a user can see on the display

Another aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur. The application views (of a respective application) in which a touch is detected may correspond to programmatic levels within a programmatic or view hierarchy of the application. For example, the lowest level view in which a touch is detected may be called the hit view, and the set of events that are recognized as proper inputs may be determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture

Hit view determination module 310 receives information related to sub events of a touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module 310 identifies a hit view as the lowest view in the hierarchy which should handle the sub-event. In most circumstances, the hit view is the lowest level view in which an initiating sub-event occurs (e.g., the first sub-event in the sequence of sub-events that form an event or potential event). Once the hit view is identified by the hit view determination module 310, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view
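
One possible expression of hit-view determination, as a rough sketch only, is a recursive descent that returns the deepest view containing the point of the initiating sub-event; the View type and hitView function below are illustrative stand-ins rather than the numbered module.

```swift
// Sketch of hit-view determination: starting at the root, descend into whichever
// subview contains the touch point, and return the deepest such view.

struct Point { var x: Double; var y: Double }

final class View {
    let name: String
    let frame: (minX: Double, minY: Double, maxX: Double, maxY: Double)
    var subviews: [View] = []

    init(name: String, minX: Double, minY: Double, maxX: Double, maxY: Double) {
        self.name = name
        self.frame = (minX, minY, maxX, maxY)
    }

    func contains(_ p: Point) -> Bool {
        p.x >= frame.minX && p.x < frame.maxX && p.y >= frame.minY && p.y < frame.maxY
    }
}

func hitView(for point: Point, in root: View) -> View? {
    guard root.contains(point) else { return nil }
    // Prefer the deepest subview that also contains the point; otherwise the root is the hit view.
    for subview in root.subviews {
        if let deeper = hitView(for: point, in: subview) {
            return deeper
        }
    }
    return root
}

// Usage: the hit view then receives all sub-events of the same touch sequence.
let content = View(name: "content", minX: 0, minY: 0, maxX: 320, maxY: 480)
let button = View(name: "button", minX: 10, minY: 400, maxX: 110, maxY: 440)
content.subviews = [button]
print(hitView(for: Point(x: 20, y: 410), in: content)?.name ?? "none")   // "button"
```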

Active event recognizer determination module 316 determines which view or views within a view hierarchy should receive a particular sequence of sub-events. In some embodiments, active event recognizer determination module 316 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 316 determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of sub-events. In other embodiments, even if touch sub-events were entirely confined to the area associated with one particular view, views higher in the hierarchy would still remain as actively involved views
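
As a rough sketch of the broader policy, the actively involved views may be modeled as the hit view together with every view above it in the hierarchy; the Node type and parent links below are hypothetical.

```swift
// Sketch of "actively involved views": the hit view plus every view above it in the
// hierarchy remains a delivery target for the sub-event sequence.

final class Node {
    let name: String
    weak var parent: Node?
    init(name: String, parent: Node? = nil) {
        self.name = name
        self.parent = parent
    }
}

/// Walk from the hit view up to the root; every view on that path is actively involved.
func activelyInvolvedViews(startingAt hitView: Node) -> [Node] {
    var views: [Node] = []
    var current: Node? = hitView
    while let node = current {
        views.append(node)
        current = node.parent
    }
    return views
}

// Usage: root -> list -> cell; a touch hitting the cell still involves list and root.
let root = Node(name: "root")
let list = Node(name: "list", parent: root)
let cell = Node(name: "cell", parent: list)
print(activelyInvolvedViews(startingAt: cell).map { $0.name })   // ["cell", "list", "root"]
```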

Event dispatcher module 322 dispatches the event information to an event recognizer (e.g., event recognizer 336). In embodiments including active event recognizer determination module 316, event dispatcher module 322 delivers the event information to an event recognizer determined by active event recognizer determination module 316. In some embodiments, event dispatcher module 322 stores the event information in an event queue, from which it is retrieved by a respective event receiver 304.
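
A minimal sketch of these two delivery modes, with hypothetical names (EventInfo, EventDispatcher, dispatch, enqueue, dequeue), follows.

```swift
// Sketch of the two delivery modes described above: direct dispatch to the chosen
// recognizer, or queueing for an event receiver to retrieve later.

struct EventInfo {
    let description: String
}

final class EventDispatcher {
    private var queue: [EventInfo] = []

    /// Direct delivery: hand the event information straight to the chosen recognizer.
    func dispatch(_ event: EventInfo, to recognizer: (EventInfo) -> Void) {
        recognizer(event)
    }

    /// Queued delivery: store the event information now; an event receiver drains it later.
    func enqueue(_ event: EventInfo) {
        queue.append(event)
    }

    /// Called by the event receiver to retrieve queued event information in order.
    func dequeue() -> EventInfo? {
        queue.isEmpty ? nil : queue.removeFirst()
    }
}

// Usage: one event dispatched directly, one queued and retrieved.
let dispatcher = EventDispatcher()
dispatcher.dispatch(EventInfo(description: "touch begin")) { print("recognizer got \($0.description)") }
dispatcher.enqueue(EventInfo(description: "touch move"))
print(dispatcher.dequeue()?.description ?? "empty")
```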

In some embodiments, operating system 208 includes event sorter 338. Alternatively, application 226 includes event sorter 338. In yet other embodiments, event sorter 338 is a stand-alone module, or a part of another module stored in memory 206, such as contact/motion module 338. 

In some embodiments, application 226 includes a plurality of event handlers 334 and one or more application views 302, each of which includes instructions for handling touch events that occur within a respective view of the application’s user interface. Each application view 302 of the application 226 includes one or more event recognizers 336. Typically, a respective application view 302 includes a plurality of event recognizers 336. In other embodiments, one or more of event recognizers 336 are part of a separate module, such as a user interface kit (not shown) or a higher level object from which application 226 inherits methods and other properties. In some embodiments, a respective event handler 334 includes one or more of: data updater 306, object updater 314, GUI updater 330, and/or event data 328 received from event sorter 338. Event handler 334 may utilize or call data updater 306, object updater 314, or GUI updater 330 to update the application internal state 332. Alternatively, one or more of the application views 302 include one or more respective event handlers 334. Also, in some embodiments, one or more of data updater 306, object updater 314, and GUI updater 330 are included in a respective application view 302. 

A respective event recognizer 336 receives event information (e.g., event data 328) from event sorter 338 and identifies an event from the event information. Event recognizer 336 includes event receiver 304 and event comparator 312. In some embodiments, event recognizer 336 also includes at least a subset of: metadata 318, and event delivery instructions 226 (which may include sub-event delivery instructions). 

Event receiver 304 receives event information from event sorter 338. The event information includes information about a sub-event, for example, a touch or a touch movement. Depending on the sub-event, the event information also includes additional information, such as location of the sub-event. When the sub-event concerns motion of a touch, the event information may also include speed and direction of the sub-event. In some embodiments, events include rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation (also called device attitude) of the device

Event comparator 312 compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator 312 includes event definitions 286. Event definitions 286 contain definitions of events (e.g., predefined sequences of sub-events), for example, event 1 (324), event 2 (320), and others. In some embodiments, sub-events in an event (287) include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching. In one example, the definition for event 1 (324) is a double tap on a displayed object. The double tap, for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first liftoff (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second liftoff (touch end) for a predetermined phase. In another example, the definition for event 2 (320) is a dragging on a displayed object. The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display 278, and liftoff of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 334. 
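
To illustrate, a simplified event comparator might represent each event definition as an ordered pattern of sub-event kinds and compare an observed sequence against those patterns; the sketch below (SubEvent, EventDefinition, EventComparator) omits the predetermined-phase timing and uses hypothetical names.

```swift
// Sketch of an event comparator: an event definition is a sequence of sub-event kinds,
// and an incoming sequence is compared against each definition (here, a double tap and
// a drag). The matching rule is a simplification for illustration only.

enum SubEvent: Equatable {
    case touchBegin
    case touchEnd
    case touchMove
    case touchCancel
}

struct EventDefinition {
    let name: String
    let pattern: [SubEvent]
}

let doubleTap = EventDefinition(
    name: "double tap",
    pattern: [.touchBegin, .touchEnd, .touchBegin, .touchEnd]   // two touch/liftoff pairs
)

let drag = EventDefinition(
    name: "drag",
    pattern: [.touchBegin, .touchMove, .touchEnd]               // touch, movement, liftoff
)

struct EventComparator {
    let definitions: [EventDefinition]

    /// Return the first definition whose pattern the observed sub-events match exactly.
    func match(_ subEvents: [SubEvent]) -> EventDefinition? {
        definitions.first { $0.pattern == subEvents }
    }

    /// Return true if the observed prefix could still grow into some definition.
    func couldStillMatch(_ subEvents: [SubEvent]) -> Bool {
        definitions.contains { $0.pattern.starts(with: subEvents) }
    }
}

// Usage: a begin/end/begin/end sequence is recognized as a double tap.
let comparator = EventComparator(definitions: [doubleTap, drag])
print(comparator.match([.touchBegin, .touchEnd, .touchBegin, .touchEnd])?.name ?? "none")  // "double tap"
print(comparator.couldStillMatch([.touchBegin, .touchMove]))                               // true (drag prefix)
```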

In some embodiments, event definition 287 includes a definition of an event for a respective user-interface object. In some embodiments, event comparator 312 performs a hit test to determine which user-interface object is associated with a sub-event. For example, in an application view in which three user-interface objects are displayed on touch-sensitive display 278, when a touch is detected on touch-sensitive display 278, event comparator 312 performs a hit test to determine which of the three user-interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 334, the event comparator uses the result of the hit test to determine which event handler 334 should be activated. For example, event comparator 312 selects an event handler associated with the sub-event and the object triggering the hit test
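
A rough sketch of such a hit test over displayed user-interface objects, using hypothetical types (Point, DisplayedObject, hitTest) and a closure as the stand-in for the associated event handler, is shown below.

```swift
// Sketch of the hit test described above: given several user-interface objects on the
// display, find the one under the touch and activate its associated handler.

struct Point { var x: Double; var y: Double }

struct DisplayedObject {
    let name: String
    let minX: Double, minY: Double, maxX: Double, maxY: Double
    let handler: (String) -> Void        // stand-in for the associated event handler

    func contains(_ p: Point) -> Bool {
        p.x >= minX && p.x < maxX && p.y >= minY && p.y < maxY
    }
}

/// Pick the displayed object under the sub-event and return it for handler activation.
func hitTest(_ point: Point, objects: [DisplayedObject]) -> DisplayedObject? {
    objects.first { $0.contains(point) }
}

// Usage: three objects on screen; the touch at (150, 40) selects the middle one.
let objects = [
    DisplayedObject(name: "photo",  minX: 0,   minY: 0, maxX: 100, maxY: 100, handler: { print("open \($0)") }),
    DisplayedObject(name: "button", minX: 100, minY: 0, maxX: 200, maxY: 100, handler: { print("tap \($0)") }),
    DisplayedObject(name: "label",  minX: 200, minY: 0, maxX: 300, maxY: 100, handler: { print("select \($0)") }),
]
if let hit = hitTest(Point(x: 150, y: 40), objects: objects) {
    hit.handler(hit.name)        // prints "tap button"
}
```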

In some embodiments, the definition for a respective event (287) also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer’s event type

When a respective event recognizer 336 determines that the series of sub-events does not match any of the events in event definitions 286, the respective event recognizer 336 enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of an ongoing touch-based gesture
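
The state behavior described above can be sketched as a small state machine in which a recognizer remains possible while the observed sub-events are a prefix of its pattern, becomes recognized on a full match, and otherwise fails and ignores further input; the names below are illustrative.

```swift
// Sketch of recognizer states: once the observed sub-events can no longer match the
// pattern, the recognizer fails and disregards the rest of the gesture.

enum SubEvent: Equatable { case touchBegin, touchEnd, touchMove, touchCancel }

enum RecognizerState: Equatable {
    case possible        // still tracking, could match
    case recognized      // a full definition matched
    case failed          // no definition can match; ignore subsequent sub-events
}

final class EventRecognizer {
    let pattern: [SubEvent]
    private var observed: [SubEvent] = []
    private(set) var state: RecognizerState = .possible

    init(pattern: [SubEvent]) { self.pattern = pattern }

    func handle(_ subEvent: SubEvent) {
        guard state == .possible else { return }   // failed/recognized recognizers disregard input
        observed.append(subEvent)
        if observed == pattern {
            state = .recognized
        } else if !pattern.starts(with: observed) {
            state = .failed
        }
    }
}

// Usage: a lone touchMove cannot begin a tap pattern, so the recognizer fails.
let tap = EventRecognizer(pattern: [.touchBegin, .touchEnd])
tap.handle(.touchMove)
print(tap.state)    // failed
```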

In some embodiments, a respective event recognizer 336 includes metadata 318 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers. In some embodiments, metadata 318 includes configurable properties, flags, and/or lists that indicate how event recognizers may interact, or are enabled to interact, with one another. In some embodiments, metadata 318 includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy

In some embodiments, a respective event recognizer 336 activates event handler 334 associated with an event when one or more particular sub-events of an event are recognized. In some embodiments, a respective event recognizer 336 delivers event information associated with the event to event handler 334. Activating an event handler 334 is distinct from sending (and deferred sending) sub-events to a respective hit view. In some embodiments, event recognizer 336 throws a flag associated with the recognized event, and event handler 334 associated with the flag catches the flag and performs a predefined process

In some embodiments, event delivery instructions 226 include sub-event delivery instructions that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively involved views. Event handlers associated with the series of sub-events or with actively involved views receive the event information and perform a predetermined process

In some embodiments, data updater 306 creates and updates data used in application 226. For example, data updater 306 updates the telephone number used in contacts module 240, or stores a video file used in video player module. In some embodiments, object updater 314 creates and updates objects used in application 226. For example, object updater 314 creates a new user-interface object or updates the position of a user-interface object. GUI updater 330 updates the GUI. For example, GUI updater 330 prepares display information and sends it to graphics module 266 for display on a touch-sensitive display
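
As a simplified illustration of this division of labor, the sketch below separates data changes, user-interface object changes, and the final GUI refresh into three hypothetical updater types coordinated by a handler.

```swift
// Sketch of a handler coordinating three updaters: data changes, user-interface
// object changes, and a final GUI refresh. Names are illustrative stand-ins.

struct AppState {
    var phoneNumbers: [String: String] = [:]                    // data the application works on
    var objectPositions: [String: (x: Double, y: Double)] = [:] // user-interface object layout
}

struct DataUpdater {
    func updatePhoneNumber(_ number: String, forContact contact: String, in state: inout AppState) {
        state.phoneNumbers[contact] = number
    }
}

struct ObjectUpdater {
    func moveObject(_ name: String, to x: Double, _ y: Double, in state: inout AppState) {
        state.objectPositions[name] = (x, y)
    }
}

struct GUIUpdater {
    func refresh(from state: AppState) {
        // Stand-in for preparing display information and handing it to a graphics module.
        print("redraw with \(state.objectPositions.count) positioned objects")
    }
}

struct EventHandler {
    let data = DataUpdater()
    let objects = ObjectUpdater()
    let gui = GUIUpdater()

    func handleDrag(of objectName: String, to x: Double, _ y: Double, state: inout AppState) {
        objects.moveObject(objectName, to: x, y, in: &state)
        gui.refresh(from: state)
    }

    func handleContactEdit(_ contact: String, number: String, state: inout AppState) {
        data.updatePhoneNumber(number, forContact: contact, in: &state)
        gui.refresh(from: state)
    }
}

// Usage: a drag handler moves an object and refreshes the GUI.
var state = AppState()
let handler = EventHandler()
handler.handleDrag(of: "photo", to: 40, 60, state: &state)
```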

In some embodiments, event handler(s) 334 includes or has access to data updater 306, object updater 314, and GUI updater 330. In some embodiments, data updater 306, object updater 314, and GUI updater 330 are included in a single module of a respective application 226 or application view 302. In other embodiments, they are included in two or more software modules

It shall be understood that the foregoing discussion regarding event handling of user touches on touch-sensitive displays also applies to other forms of user inputs to operate multifunction devices 200 with input devices, not all of which are initiated on touch screens. For example, mouse movement and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements such as taps, drags, scrolls, etc. on touchpads; pen stylus inputs; movement of the device; oral instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events which define an event to be recognized. 


Parts List

100

system

102

DA client

104

user device

106

DA server

108

server system

110

network(s)

112

client-facing I/O interface

114

processing modules

116

data and models

118

I/O interface to external services

120

external services

122

second user device

202

portable multifunction device

204

item

206

memory

208

operating system

210

widget creator module

212

e-mail client module

214

map module

216

item

218

video and music player module

220

user-created widgets

222

notes module

224

instant messaging module

226

applications

228

video conference module

230

image management module

232

device/global internal state

234

telephone module

236

camera module

238

dictionary widget

240

contacts module

242

workout support module

244

user data and models

246

alarm clock widget

248

Digital Assistant Client Module

250

calculator widget

252

GPS module

254

stocks widget

256

weather widget

258

text input module

260

haptic feedback module

262

widget modules

264

online video module

266

graphics module

268

calendar module

270

contact/motion module

272

search module

274

browser module

276

communication module

278

touch-sensitive display system

280

display controller

282

other input control devices

284

haptic feedback controller

286

intensity sensor controller

288

other input controller

290

tactile output generator

292

optical sensor controller

294

contact intensity sensor

296

optical sensor(s)

298

I/O subsystem

300

RF circuitry

302

accelerometer(s)

304

proximity sensor

306

audio circuitry

308

microphone

310

speaker

312

power system

314

external port

316

chip

318

controller

320

processor(s)

322

peripherals interface

324

signal line

326

event delivery

328

event data

330

GUI updater

332

application internal state

334

event handler

336

event recognizer

338

event sorter


Terms/Definitions

“multitouch”/multiple finger contacts

other properties

same plane

gesture input

aspect

spoken and/or textual form

rough finger-based input

input

iPod

IEEE 802.11b, IEEE 802.11g, IEEE 802.11n

HSDPA

lowest view

transparency

“non-transitory computer-readable storage medium”

imaging module

online video application

extended period

various client-side digital assistant instructions

widget

pointer device

plural forms

video files

RTXC

Central Park

battery

graphics module

telephone

click wheels

pen stylus inputs

interface

applications

events

touch screen

contextual history

location-based dialing

application(s)

touch-sensitive surface and/or changes

technology

exemplary components

double tap

other display technologies

longer press

U.S. patent application No. 11/048

stylus-based input

user-specified portion

“Multipoint Touchscreen

units

other functions

integers

Westerman et al.

display information

viewfinder

“Virtual Input Device Placement

values

approximately 160 dpi

audio interface

affordances

programmatic hierarchy

separate and different inputs

cellular telephone network

sub-event concerns motion

particular contact pattern

short-range communication radio

particular sensory perception

event type

predefined set

two or more software modules

main, home

power management

dials

executable instructions

time

state queue

various tasks

power level

map/navigation widgets

antenna system

image

event handlers

medium

deleting name(s)

instant messaging

workout

sets

multiple simultaneous contacts

wireless network

Touch-Screen Virtual Keyboard

speed

tactile feedback generation instructions

characteristics

personal electronic device

external services

surrounding environment

voice/audio based platforms

respective application view

hit view determination module

server-side functionalities

data

touch movement

suitable electronic device

back

phrase

electronic address book

alarm clock widget

task

“Multi-Functional Hand-Held Device

“Operation

perceived change

predetermined process

“graphics”

velocity

user preference data

same position

gesture includes

other visual property

other elements

reduced-size device

client

supplemental information

sound

rotation

large range

suitable object or appendage

predefined event

Presence Service

physical actuator button

other applications

contact/motion module

map module

significant event

Mobile Communications

most circumstances

mobile phone

local Wi-Fi network

respective user device

TDMA

response

touch-based gesture

messages

external port

larger area

user-interface object

touch screen display

communication network(s)

following modules

distributed network

laptop or tablet computers

metadata

appended claims

event data

attachments

electromechanical devices

users

trademark

widgets

event delivery

digital assistant module

computer

speech input

networks

stylus

respective hit view

capacitive touch-sensitive surfaces

software components

searching

tactile output

portable devices

divisions

visual form

web pages

devices

additional description

electrical signal

e.g., speaker

conversation

accelerometer

trackpad or touch screen display

above

richer interactive experience

subset or superset

iPad.RTM

touch

separate module

flag

desktop computer

haptic feedback module

address book

AAC files

at least one contact intensity sensor

contextual information

e-mail client module

magnitude

U.S. patent application Ser

finger-up event

system

mouse

optical sensor

device/global internal state

portable multifunctional device

DC-HSPDA

ambient temperature

APIs

sub-event delivery instructions

calif

described sensory perception

abbreviated request

detected contacts

timings

sub-events

substitute

messaging platform

tactile outputs

user inputs

HTML

video module

BTLE

third-party cloud service providers

many sensory perceptions

conjunction

mini-applications

software parameters

peripherals interface

user interface

surface

Graphical User Interface

contrast

figure

VxWorks

eye movements

active event recognizer determination module

IM 241, browser

store

light emitting diode

library

corresponding code

other proximity sensor arrays

sensor

two or more parties

device temperature

various points

touch screens

complete request

battery power

system-level click “intensity” parameter

west gate

various sensors

calendar service

server-side portion

respective application

physical hardware

touch sub-events

object

GPS module

request information or performance

natural language processing models

power converter or inverter

only two user devices

user request

e-mails

other files

characters

single module

stated condition

speech output

text input module

JAVA-enabled applications

event recognizer’s

PDA and/or music player functions

user touches

power failure detection circuit

screen image data

scrolls

volume control

perform task execution

communication module

tactile output generator(s)

speech recognition models

event handler

other non-volatile solid-state memory devices

sub-event delivery

more detailed description

other event recognizers

application views or user interface windows

access

tactile forms

inferred user intent

environment

event handling

force or pressure

backend server

videos

service models

outputs

weighted average

other widgets

modules and data structures

without limitation

acceleration

user-interface objects

physical state

multifunctional devices

displacement

more or fewer components

my friends

CDMA

“virtual assistant

input interface

following description

video and music player module

position

instant message

IEEE 802.11ac

touch-sensitive area

touchpads

touch-sensitive display

workout sensor data

data content

visual output

optical sensor(s)

number and type

series

event sorter

trackpad or touch screen display hardware

computer-implemented methods

application view

time division

separate software programs

display

touch-sensitive display system

touch cancellation

area

haptic feedback controller

images or videos

iPhone.RTM

various sensors or combinations

event information

device orientation

protocol

Performing Gestures

touchpad

light

headset jack

steps and parameters

user

multiple modes

program

substitute measurements

U.S. Provisional Patent Application No. 60/936

hit test

SIMPLE

additional modules

power management system

directions

force measurements

respective view

light emitting polymer display

photos

memory management

controller

FIGS

combinations

event dispatcher module

limited battery power

attitude

computer-readable storage mediums

audio data

input/output peripherals

their entirety

cellular signals strength

predetermined intervals

FIREWIRE

other word processing applications

event definition

various regions

location-based services

such communications

event receiver

sensors

text input

external, connected display

wide area networks

touch begin

touch-sensitive surface proximate

DA server

intent

weather widget

Unlock Image

“Acceleration-based Theft Detection System

standalone application

touch-sensitive surface

event delivery system

embedded operating system

numerous other ways

at least a subset

three user-interface objects

event delivery instructions

WINDOWS

output responses

predefined process

purpose

darwin

e.g., image and text

individualized sensory perceptions

Short Message Service

iPod Touch.RTM

benefits

user interface kit

mass

various wired or wireless protocols

hardware and software

non-transitory computer-readable storage medium

graphics

procedures

lists

recharging system

instruction execution system

event comparator

portrait orientation

soft keys

location information

instructions)

structural changes

various approaches

flags

inquiry

touch-based gestures

other forms

following U.S. Pat

magnitude and direction

range

information service(s)

known network protocol

television

other physical input control device

system architecture

visual interface

ethernet

online video module

or touchpad

similar compact electronic device

coordinate data

view hierarchy

electromagnetic signals

interest

stand-alone module

Touch Screen User Interface

mutual capacitance

predetermined duration

process

vibrations

event handler(s)

proxy configuration

download and play

environments

various data

contact intensity information

event or sub-event

phone call

internet-based instant messages

video stream

to-do lists

accelerometer(s)

various described examples

unit area

physical buttons

individual intensity thresholds

images

plurality

ongoing touch-based gesture

other location-based data

verbal responses

Bluetooth Low Energy

subset

user input and determining user intent

signal line

associating telephone number(s)

user instructions

user-facing input and output processing

other touch-sensitive devices

communication

additional device functionality

physical keyboard

multi-touch gesture

mouse “click” threshold

second input

Wireless Fidelity

infrared port

participant

Touch Sensitive Input Devices

W-CDMA

greater detail

gestures

touch-sensitive display or trackpad

recognized event

menus

various software components

current user input

wireless local area network

cupertino

steps

“Multipoint Touch Surface Controller

“Mode-Based Graphical User Interfaces

root menu

palm

device housing

Apple Inc

instructions

lowest level view

memory

communication protocols

EDGE

Extensible Markup Language

various virtual devices and/or services

friends

pressure threshold

various models

knob

example

event definitions

groups

contact data

liftoff

lighting

proper inputs

physical address

greater communication capabilities

music

physical location

additional information

control devices

calorie

piezoelectric actuator

user interactions

application views

information processing system

error logs

infrastructure resources

Presence Leveraging Extensions

lock

delayed actions

actively involved views

particular sequence

subsequent sub-events

Digital Assistant Client Module

initial touch

Enhanced Messaging Service

store calendars

information being

electromagnetic signals and communicates

analysis

higher level object

contact force or pressure

push button

single or multiple keyboard presses

hierarchy

hit view

application-specific integrated circuits

predetermined phase

landscape orientation

30-pin connector

user access

form

notes

various operations

situation

at least one tactile output generator

calendars

particular physical actuators

behalf

broader range

display controller

event or sub event

e.g., text, audio, images, video, etc

RF circuitry

“DA client

birthday party

touch-sensitive displays

output processing functions

actions

additional input

Session Initiation Protocol

user interfaces

Augment Proximity Sensor Output”

device

shopping lists

following applications

other input

connection

“digital assistant

movements

presence or addition

Internet Protocol

terms

context

associated modules and/or sets

buttons

next week

primary input control device

“Automated Response

non-volatile memory

estimated force

server portions

specific requirements

“Methods And Apparatuses

e.g., microphone

names

workout support module

other display

reference

contacts

continuous dialogue

narrative

various applications

intensity thresholds

chip

accelerometer(s)

push buttons

particular functions

specific examples

Hypertext Markup Language

peripheral devices

definitions

performing aspects

data structures

e-mail addresses

access services

hardware

object updater

electric force sensors

cases

attribute

limited real estate

active application state

name(s)

search

gesture

other user interfaces

“Gestures

mobile telephone

code division

headset

other components

Enhanced Data GSM Environment

point

hundreds

peripherals

multifunction device

varying levels

message environment

audio input/output peripherals

contact movements

functionality

Portable Devices”

maps

various examples

multiple access

JavaScript file

wideband code division

block diagram illustrating

management and distribution

multifunction devices

various output interfaces

intent deduction and/or fulfillment

natural language input

power

video conference module

stated features

deliver event information

exemplary embodiment

multiple views

speed and direction

landscape view

human-audible sound waves

digital slide show or album

IMAP

television set-top box

physical characteristics

description

modules

picture/video metadata

ambient noise

updater

natural language dialogue

movement

User Activity

statement

linear motion

server

sensor state

contents

potential event

digital assistant server

context data

camera

prior state or view

performs actions

vice versa

“touch screen”

component

workout sensors

other attachments

redo/undo queue

interactions

detection

method

local yellow page widgets

presentation applications

size

accompanying drawings

operating system

server system

definition

software

processing modules

select and play music

data updater

drivers

bluetooth

user data and models

LINUX

“menu button”

generating instructions

virtual or soft buttons

software procedures

calibrate sensors

detecting intensity

internet-based messages

ontology

only one example

requests

background services

second liftoff

various functions

statistical language models

slider switches

greater accessibility

weather widgets

estimated force or pressure

sequence

text

respective event handler

general system tasks

lens

video resolution

delegates

audio

state

suitable calendar invite

user touch

audio circuitry

subsystems

only the hit view

user-specified name pronunciations

digital assistant

new user-interface object or updates

VoIP

situations

holds

chronological format

user interface state information

IEEE 802.11a

calculator widget

contact

XMPP

force

e-mail address

location and orientation

finger

detected contact

place

e.g., contacts

particular location

intensities

instance

DA server system

precise pointer/cursor position or command

telephone number

single chip

I/O subsystem

request

methods

wireless communication

current orientation

electronic devices

wired or wireless network

at least four distinct values and more

physical input control devices

multiple participants

communications standards

device’s

multiple touching

other communications devices

such embodiments

widget 249

yet other embodiments

views

illustration specific examples

local area networks

multi-touch sensitive touchpads

UNIX

direct communication connection

applications etc

user-created widget 249

provision

output

menu button

flash memory devices

programmatic levels

functions

specification

predetermined noise threshold

respective telephone number

current user interaction

user data

image management module

“roughness”

block diagram

presence

communications networks

audio output

past and present network activities

operations

Dual-Cell HSPA

video conferencing

specification and claims

U.S. Patent Publication No

messaging environment

interaction

gestural input

objects

“touch-sensitive display system

task completion or information acquisition

short concise communications

joystick

particular examples

controls

widget creator module

telephone module

software settings

output interface

Portable Electronic Devices

well-known circuitry

contact list

other tactile output

e.g., device

proximate

network(s)

client-facing input

optical sensor controller

“intelligent automated assistant

number

initiating sub-event occurs

event]”

storage device control

touch end

DA client

protocols

amount

single contacts

sensor information

goals

soft keyboards

laptop computer

U.S. Patent Publication 2002/0015024A

respective event receiver

memory controller

Global Positioning System

manage e-mail

button

motor

speakers

speaker

search module

watch

excess

underlying computing resources

operation

metropolitan area network

conversational interface

telephony-based messages

requested task

other global navigation system

question

digital images

HSUPA

input source

term

contact and/or changes

mouse movement and mouse button presses

couples

particular online video

various elements

programmed actions

contact area

intranet

capacitance

Global System

multifunctional device

other intensity sensors

label

microphone

liquid crystal display

proxy

sub-modules

I/O interface to external services

application’s

performance

rendering

icon

respective user-interface object

camera module

center

text, music, sound, image, video

combination

singular forms

inputs

input controller

widget modules

inherits methods

abbreviated requests

movement or breaking

application internal state

other embodiments

tactile output generator

view

finger-up

user input

intensity

proximity sensor

animations

implementations

physical click wheel

delete

workout data

e.g., video and music player module

portable multifunction device

icons

touch-sensitive touchpads

SIMPLE, or IMPS

noisy environments

scope

elements

RF signals

couple input and output peripherals

extension

RF transceiver

voice

first liftoff

taps

distinct values

others

intensity sensor controller

web page

optical force sensors

pressure-sensitive tip

digital signal processor

examples

residing

electroactive polymer

natural language command

processor-containing system

various known software components

IMPS

multiple exchanges

software state

Cascading Style Sheets

programs

input devices

video player module

instant messaging module

network

other application

such interpretations

portrait or landscape

“Proximity Detector

task flow

other image editing applications

video conference

audio files

music player module

various embodiments

biometric inputs

motion patterns

computer-based system

view determination module

DA clients

respective event

online videos

non-portable multifunctional device

various subsets

first touch

terminology

magnetometer

portrait view

above-identified modules and applications

my girlfriend’s

instant messages

resources usage

invoking programs

other information processing methods

computers

various hardware

dragging

energy

text messages

stocks widget

other video conference participants

evolution

quick press

data and models

access one or more telephone numbers

front

predefined threshold values

various software programs and/or sets

change

I/O subsystem

sound waves

electrical signals

user’s

resistance

light-emitting diode

tasks

video

other suitable communication protocol

presence protocol

alerts

near field communication

internet

Multimedia Message Service

more than one view

services

determines

screen

output processing

result

post office protocol

USB port

still and/or video image acquisition

that found

user devices

notes module

properties

only user-facing input

“Methods And Systems

headphone

multi-party conversation

finger contact

click”

surface acoustic wave technologies

wireless LAN

pressure information

link

responses

information

CODEC chipset

third-party service providers

telephone numbers

task flow models

previous actions

sports devices

workouts

communicatively couple

GUI updater

saturation

Ambient Light Sensor

previous interactions

smoothness

e-mail

informational answer or performance

still or video images

name

physical displacement

functionalities

configurable properties

technologies

display state

device attitude

power status indicator

browse

two or more components

housing

satisfactory response

hand

various user interfaces

wider range

Universal Serial Bus

Touch Screen Interface

extensible messaging

none

requested informational answer

e.g., event recognizer

video image acquisition

pressure

associated listed items

convenience

navigation service(s)

device location

game console

at least one tactile output generator sensor

electrostatic actuator

event recognizers

event

module

other audio components

user-specific vocabulary data

screen display

current application view(s)

user-created widgets

westerman

activation thresholds

document

oral instructions

apparatus

generation

voice input

stores

user and device

respective instant message

embodiments

separate chips

subscriber identity module

first input

stated condition or event]

other devices

iPod.RTM

“Activating Virtual Keys

break

client-side functionalities

same touch

download

other functionalities

complementary metal-oxide semiconductor

functionality and capabilities

limited communication capabilities

visual impact

messaging service(s)

sub-event definitions

power system

audio forms

Portable Device

second touch

capacitive force sensors

touch input

tablet computer

video images

its entirety

contacts module

tuner

user intent

other examples

output-only headphones

comparison

video file

telephony-based instant messages

addition

voice recognition

contact intensity sensor

part

e.g., touch screen displays and/or touchpads

displayed object

generated tactile output

World Wide Web

single optical sensor

term “tactile output” refers

client-server model

stated condition or event],” depending

intensity threshold

sub-event

information and return relevant data

event recognizer

embodiments, device

user device

determines or updates

input controller(s)

finger-based contacts and gestures

Automatic Configuration

high-speed random access memory

physical push button

haptic and/or tactile contact

high-speed uplink packet access

processor(s)

store maps

client-facing I/O interface

event monitor

various components

previous position

other system

second user device

portable multifunction devices

piezoelectric force sensors

filing date

portions

up/down button

term “graphics”

audible (e.g., speech

finger-down event

tactile sensation

charge-coupled device

digital rights management

brightness

opposite touch screen display

client-side portion

conduct

term “intensity”

other sound files

input/output

MP3 player

long term evolution

limitation

keyboard

joysticks

multiple force sensors

distance

or memory

aforementioned applications

Handheld Device”

other graphic property data

magnitude and/or direction

communications

other points

encryption

other input control devices

location

browser module

radio frequency

voice replication

internet message access protocol

alternate embodiments

e-mail

CMOS

HSPA

calendar module

other part

high-speed downlink packet access

respective event recognizer

components

current location

physical/mechanical control

foregoing discussion regarding event handling

rocker buttons

still image or video

sequences

edit

large majority

support module

GLONASS