From e2a37bc1a67a751a6686318f83ce0088448f92c1 Mon Sep 17 00:00:00 2001
From: bmielnicki
Date: Wed, 7 Oct 2020 16:24:06 +0200
Subject: [PATCH 1/2] named parameters in up.sh; saving trajectories
---
.gitignore | 2 +
README.md | 21 +++---
docker-compose.yml | 6 +-
server/Dockerfile | 10 ++-
server/app.py | 5 +-
server/config.json | 1 +
server/game.py | 102 +++++++++++++++++++----------
server/static/templates/index.html | 7 +-
up.sh | 69 +++++++++++++++++--
9 files changed, 167 insertions(+), 56 deletions(-)
diff --git a/.gitignore b/.gitignore
index 6e46b7a..8fb0b64 100644
--- a/.gitignore
+++ b/.gitignore
@@ -7,3 +7,5 @@ node_modules/
**/master_agents/
**/__pycache__/
**/agents/
+**/trajectories/
+.env
diff --git a/README.md b/README.md
index 9bfe013..2d3580d 100644
--- a/README.md
+++ b/README.md
@@ -21,7 +21,7 @@ Building the server image requires [Docker](https://docs.docker.com/get-docker/)
The server can be deployed locally using the driver script included in the repo. To run the production server, use the command
```bash
-./up.sh production
+./up.sh --env production
```
In order to build and run the development server, which includes a deterministic scheduler and helpful debugging logs, run
@@ -40,28 +40,33 @@ In order to kill the production server, run
The Overcooked-Demo server relies on both the [overcooked-ai](https://github.com/HumanCompatibleAI/overcooked_ai) and [human-aware-rl](https://github.com/HumanCompatibleAI/human_aware_rl) repos. The former contains the game logic, the latter contains the rl training code required for managing agents. Both repos are automatically cloned and installed in the Docker builds.
-The branch of `overcooked_ai` and `human_aware_rl` imported in both the development and production servers can be specified by the `OVERCOOKED_BRANCH` and `HARL_BRANCH` environment variables, respectively. For example, to use the branch `foo` from `overcooked-ai` and branch `bar` from `human_aware_rl`, run
+The branches of `overcooked_ai` and `human_aware_rl` imported in both the development and production servers can be specified by the `--overcooked-branch` and `--harl-branch` parameters, respectively. For example, to use the branch `foo` from `overcooked-ai` and branch `bar` from `human_aware_rl`, run
```bash
-OVERCOOKED_BRANCH=foo HARL_BRANCH=bar ./up.sh
+./up.sh --overcooked-branch foo --harl-branch bar
```
The default branch for both repos is currently `master`.
## Using Pre-trained Agents
-Overcooked-Demo can dynamically load pre-trained agents provided by the user. In order to use a pre-trained agent, a pickle file should be added to the `agents` directory. The final structure will look like `static/assets/agents//agent.pickle`. Note, to use the pre-defined rllib loading routine, the agent directory name must start with 'rllib', and contain the appropriate rllib checkpoint, config, and metadata files. For more detailed info and instructions see the [RllibDummy_CrampedRoom](server/static/assets/agents/RllibDummy_CrampedRoom/) example agent.
+Overcooked-Demo can dynamically load pre-trained agents provided by the user. In order to use a pre-trained agent, a pickle file should be added to the `agents` directory. The final structure will look like `static/assets/agents/<agent_name>/agent.pickle`. A different agents directory can also be specified with the `--agents-dir` parameter.
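+For example, to load agents from a local `./my_agents` directory (an illustrative path), run
+```bash
+./up.sh --agents-dir ./my_agents
+```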
+Note, to use the pre-defined rllib loading routine, the agent directory name must start with 'rllib', and contain the appropriate rllib checkpoint, config, and metadata files. For more detailed info and instructions see the [RllibDummy_CrampedRoom](server/static/assets/agents/RllibDummy_CrampedRoom/) example agent.
If a more complex or custom loading routine is necessary, one can subclass the `OvercookedGame` class and override the `get_policy` method, as done in [DummyOvercookedGame](server/game.py#L420). Make sure the subclass is properly imported [here](server/app.py#L5)
+## Saving trajectories
+Trajectories from games run in Overcooked-Demo can be saved. The `--trajectories-dir` parameter specifies the directory used to store saved trajectories. By default, trajectories are saved in the `static/assets/trajectories` directory.
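+For example, to save trajectories to a local `./my_trajectories` directory (an illustrative path), run
+```bash
+./up.sh --trajectories-dir ./my_trajectories
+```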
+
+
## Updating Overcooked_ai
-This repo was designed to be as flexible to changes in overcooked_ai as possible. To change the branch used, use the `OVERCOOKED_BRANCH` environment variable shown above.
+This repo was designed to be as flexible to changes in overcooked_ai as possible. To change the branch used, use the `--overcooked-branch` parameter shown above.
Changes to the JSON state representation of the game will require updating the JS graphics. At the highest level, a graphics implementation must implement the functions `graphics_start`, called at the start of each game, `graphics_end`, called at the end of each game, and `drawState`, called at every timestep tick. See [dummy_graphcis.js](server/graphics/dummy_graphics.js) for a barebones example.
-The graphics file is dynamically loaded into the docker container and served to the client. Which file is loaded is determined by the `GRAPHICS` environment variable. For example, to server `dummy_graphics.js` one would run
+The graphics file is dynamically loaded into the docker container and served to the client. Which file is loaded is determined by the `--graphics` parameter. For example, to serve `dummy_graphics.js`, one would run
```bash
-GRAPHICS=dummy_graphics.js ./up.sh
+./up.sh --graphics dummy_graphics.js
```
-The default graphics file is currently `overcooked_graphics_v2.1.js`
+The default graphics file is currently `overcooked_graphics_v2.2.js`
## Configuration
diff --git a/docker-compose.yml b/docker-compose.yml
index 0939c0e..fb0d8e7 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -2,6 +2,8 @@ version : '3.7'
services:
app:
+ env_file:
+ - .env
build:
context: ./server
args:
@@ -13,4 +15,6 @@ services:
FLASK_ENV: "${BUILD_ENV:-production}"
ports:
- "80:5000"
-
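+    # host directories for agents and saved trajectories are mounted into the container; override them via AGENTS_DIR / TRAJECTORIES_DIR in the generated .env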
+ volumes:
+ - "${AGENTS_DIR:-./server/static/assets/agents}:/app/static/assets/agents"
+ - "${TRAJECTORIES_DIR:-./server/static/assets/trajectories}:/app/static/assets/trajectories"
diff --git a/server/Dockerfile b/server/Dockerfile
index fcd67d3..a225bc0 100644
--- a/server/Dockerfile
+++ b/server/Dockerfile
@@ -1,10 +1,4 @@
FROM python:3.7-stretch
-
-ARG BUILD_ENV
-ARG OVERCOOKED_BRANCH
-ARG HARL_BRANCH
-ARG GRAPHICS
-
WORKDIR /app
# Install non-chai dependencies
@@ -12,10 +6,13 @@ COPY ./requirements.txt ./requirements.txt
RUN pip install -r requirements.txt
# Install eventlet production server if production build
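+# note: build ARGs appear to be declared just before first use so that changing one only invalidates later cached layers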
+ARG BUILD_ENV
RUN if [ "$BUILD_ENV" = "production" ] ; then pip install eventlet ; fi
# Clone chai code
+ARG OVERCOOKED_BRANCH
RUN git clone https://github.com/HumanCompatibleAI/overcooked_ai.git --branch $OVERCOOKED_BRANCH --single-branch /overcooked_ai
+ARG HARL_BRANCH
RUN git clone https://github.com/HumanCompatibleAI/human_aware_rl.git --branch $HARL_BRANCH --single-branch /human_aware_rl
# Dummy data_dir so things don't break
@@ -31,6 +28,7 @@ RUN apt-get install -y libgl1-mesa-dev
# Copy over remaining files
COPY ./static ./static
COPY ./*.py ./
+ARG GRAPHICS
COPY ./graphics/$GRAPHICS ./static/js/graphics.js
COPY ./config.json ./config.json
diff --git a/server/app.py b/server/app.py
index 89766a7..252519f 100644
--- a/server/app.py
+++ b/server/app.py
@@ -14,7 +14,6 @@
from game import OvercookedGame, OvercookedTutorial, Game, OvercookedPsiturk
import game
-
### Thoughts -- where I'll log potential issues/ideas as they come up
# Should make game driver code more error robust -- if overcooked randomlly errors we should catch it and report it to user
# Right now, if one user 'join's before other user's 'join' finishes, they won't end up in same game
@@ -45,6 +44,8 @@
# Path to where pre-trained agents will be stored on server
AGENT_DIR = CONFIG['AGENT_DIR']
+TRAJECTORIES_DIR = CONFIG["TRAJECTORIES_DIR"]
+
# Maximum number of games that can run concurrently. Contrained by available memory and CPU
MAX_GAMES = CONFIG['MAX_GAMES']
@@ -91,7 +92,7 @@
"psiturk" : OvercookedPsiturk
}
-game._configure(MAX_GAME_LENGTH, AGENT_DIR)
+game._configure(MAX_GAME_LENGTH, AGENT_DIR, TRAJECTORIES_DIR)
diff --git a/server/config.json b/server/config.json
index b34bbd7..0695d49 100644
--- a/server/config.json
+++ b/server/config.json
@@ -4,6 +4,7 @@
"MAX_GAMES" : 10,
"MAX_GAME_LENGTH" : 120,
"AGENT_DIR" : "./static/assets/agents",
+ "TRAJECTORIES_DIR": "./static/assets/trajectories",
"MAX_FPS" : 30,
"psiturk" : {
"experimentParams" : {
diff --git a/server/game.py b/server/game.py
index c052a8d..6cc910b 100644
--- a/server/game.py
+++ b/server/game.py
@@ -6,20 +6,24 @@
from overcooked_ai_py.mdp.overcooked_env import OvercookedEnv
from overcooked_ai_py.mdp.actions import Action, Direction
from overcooked_ai_py.planning.planners import MotionPlanner, NO_COUNTERS_PARAMS
+from overcooked_ai_py.agents.benchmarking import AgentEvaluator
from human_aware_rl.rllib.rllib import load_agent
import random, os, pickle, json
import ray
+import numpy as np
# Relative path to where all static pre-trained agents are stored on server
AGENT_DIR = None
+TRAJECTORIES_DIR = None
# Maximum allowable game time (in seconds)
MAX_GAME_TIME = None
-def _configure(max_game_time, agent_dir):
- global AGENT_DIR, MAX_GAME_TIME
+def _configure(max_game_time, agent_dir, trajectories_dir):
+ global AGENT_DIR, MAX_GAME_TIME, TRAJECTORIES_DIR
MAX_GAME_TIME = max_game_time
AGENT_DIR = agent_dir
+ TRAJECTORIES_DIR = trajectories_dir
class Game(ABC):
@@ -381,13 +385,13 @@ class OvercookedGame(Game):
- _curr_game_over: Determines whether the game on the current mdp has ended
"""
- def __init__(self, layouts=["cramped_room"], mdp_params={}, num_players=2, gameTime=30, playerZero='human', playerOne='human', showPotential=False, randomized=False, **kwargs):
+ def __init__(self, layouts=["cramped_room"], mdp_params={}, num_players=2, gameTime=30, playerZero='human', playerOne='human', showPotential=False, randomized=False, saveTrajectory=False, **kwargs):
super(OvercookedGame, self).__init__(**kwargs)
self.show_potential = showPotential
self.mdp_params = mdp_params
self.layouts = layouts
self.max_players = int(num_players)
- self.mdp = None
+ self.env = None
self.mp = None
self.score = 0
self.phi = 0
@@ -406,7 +410,7 @@ def __init__(self, layouts=["cramped_room"], mdp_params={}, num_players=2, gameT
self.curr_tick = 0
self.human_players = set()
self.npc_players = set()
-
+ self.save_trajectory = bool(saveTrajectory)
if randomized:
random.shuffle(self.layouts)
@@ -421,7 +425,15 @@ def __init__(self, layouts=["cramped_room"], mdp_params={}, num_players=2, gameT
self.add_player(player_one_id, idx=1, buff_size=1, is_human=False)
self.npc_policies[player_one_id] = self.get_policy(playerOne, idx=1)
self.npc_state_queues[player_one_id] = LifoQueue()
-
+ self.trajectory = []
+
+ @property
+ def mdp(self):
+ return self.env.mdp
+
+ @property
+ def state(self):
+ return self.env.state
def _curr_game_over(self):
return time() - self.start_time >= self.max_time
@@ -479,7 +491,41 @@ def is_ready(self):
def apply_action(self, player_id, action):
pass
- def apply_actions(self):
+ def _log_trajectory_step(self, state, joint_action, reward, done, info):
+ self.trajectory.append((state, tuple(joint_action), reward, done, info))
+
+ def _get_trajectory_dict(self):
+ trajectories = { k:[] for k in self.env.DEFAULT_TRAJ_KEYS }
+ trajectory = np.array(self.trajectory)
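+ # each logged step is a (state, joint_action, reward, done, info) tuple; the transposed columns below recover those per-field sequences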
+ obs, actions, rews, dones, infos = trajectory.T[0], trajectory.T[1], trajectory.T[2], trajectory.T[3], trajectory.T[4]
+ trajectories["ep_states"].append(obs)
+ trajectories["ep_actions"].append(actions)
+ trajectories["ep_rewards"].append(rews)
+ trajectories["ep_dones"].append(dones)
+ trajectories["ep_infos"].append(infos)
+ trajectories["ep_returns"].append(self.score)
+ trajectories["ep_lengths"].append(self.env.state.timestep)
+ trajectories["mdp_params"].append(self.env.mdp.mdp_params)
+ trajectories["env_params"].append(self.env.env_params)
+ trajectories["metadatas"].append({})
+ trajectories = {k: np.array(v) for k, v in trajectories.items()}
+
+ AgentEvaluator.check_trajectories(trajectories)
+ return trajectories
+
+ def _create_trajectory_filename(self):
+ millis_timestamp = str(int(time() * 1000))
+ return "trajectory-" + millis_timestamp + ".json"
+
+ def get_data(self):
+ if self.save_trajectory:
+ file_path = os.path.join(TRAJECTORIES_DIR, self._create_trajectory_filename())
+ traj_dict = self._get_trajectory_dict()
+ AgentEvaluator.save_traj_as_json(traj_dict, file_path)
+ self.trajectory = []
+ return super(OvercookedGame, self).get_data()
+
+ def apply_actions(self, log_trajectory=True):
# Default joint action, as NPC policies and clients probably don't enqueue actions fast
# enough to produce one at every tick
joint_action = [Action.STAY] * len(self.players)
@@ -490,10 +536,12 @@ def apply_actions(self):
joint_action[i] = self.pending_actions[i].get(block=False)
except Empty:
pass
-
- # Apply overcooked game logic to get state transition
- prev_state = self.state
- self.state, info = self.mdp.get_state_transition(prev_state, joint_action)
+
+ prev_state = self.env.state
+ new_state, reward, done, info = self.env.step(joint_action)
+ if log_trajectory:
+ self._log_trajectory_step(prev_state, joint_action, reward, done, info)
+
if self.show_potential:
self.phi = self.mdp.potential_function(prev_state, self.mp, gamma=0.99)
@@ -503,24 +551,22 @@ def apply_actions(self):
self.npc_state_queues[npc_id].put(self.state, block=False)
# Update score based on soup deliveries that might have occured
- curr_reward = sum(info['sparse_reward_by_agent'])
+ curr_reward = sum(info['sparse_r_by_agent'])
self.score += curr_reward
- # Return about the current transition
return prev_state, joint_action, info
-
def enqueue_action(self, player_id, action):
overcooked_action = self.action_to_overcooked_action[action]
super(OvercookedGame, self).enqueue_action(player_id, overcooked_action)
def reset(self):
+ self.env.reset()
status = super(OvercookedGame, self).reset()
if status == self.Status.RESET:
# Hacky way of making sure game timer doesn't "start" until after reset timeout has passed
self.start_time += self.reset_timeout / 1000
-
def tick(self):
self.curr_tick += 1
return super(OvercookedGame, self).tick()
@@ -533,10 +579,11 @@ def activate(self):
raise ValueError("Inconsistent State")
self.curr_layout = self.layouts.pop()
- self.mdp = OvercookedGridworld.from_layout_name(self.curr_layout, **self.mdp_params)
+ mdp = OvercookedGridworld.from_layout_name(self.curr_layout, **self.mdp_params)
+ self.env = OvercookedEnv.from_mdp(mdp)
if self.show_potential:
self.mp = MotionPlanner.from_pickle_or_compute(self.mdp, counter_goals=NO_COUNTERS_PARAMS)
- self.state = self.mdp.get_standard_start_state()
+
if self.show_potential:
self.phi = self.mdp.potential_function(self.state, self.mp, gamma=0.99)
self.start_time = time()
@@ -559,11 +606,9 @@ def deactivate(self):
# Wait for all background threads to exit
for t in self.threads:
t.join()
-
# Clear all action queues
self.clear_pending_actions()
-
def get_state(self):
state_dict = {}
state_dict['potential'] = self.phi if self.show_potential else None
@@ -617,9 +662,8 @@ class OvercookedPsiturk(OvercookedGame):
"""
def __init__(self, *args, psiturk_uid='-1', **kwargs):
- super(OvercookedPsiturk, self).__init__(*args, showPotential=False, **kwargs)
+ super(OvercookedPsiturk, self).__init__(*args, showPotential=False, saveTrajectory=False, **kwargs)
self.psiturk_uid = psiturk_uid
- self.trajectory = []
def activate(self):
"""
@@ -628,17 +672,10 @@ def activate(self):
super(OvercookedPsiturk, self).activate()
self.trial_id = self.psiturk_uid + str(self.start_time)
- def apply_actions(self):
- """
- Applies pending actions then logs transition data
- """
- # Apply MDP logic
- prev_state, joint_action, info = super(OvercookedPsiturk, self).apply_actions()
-
- # Log data to send to psiturk client
- curr_reward = sum(info['sparse_reward_by_agent'])
+ def _log_trajectory_step(self, state, joint_action, reward, done, info):
+ curr_reward = sum(info['sparse_r_by_agent'])
transition = {
- "state" : json.dumps(prev_state.to_dict()),
+ "state" : json.dumps(state.to_dict()),
"joint_action" : json.dumps(joint_action),
"reward" : curr_reward,
"time_left" : max(self.max_time - (time() - self.start_time), 0),
@@ -653,7 +690,6 @@ def apply_actions(self):
"player_0_is_human" : self.players[0] in self.human_players,
"player_1_is_human" : self.players[1] in self.human_players
}
-
self.trajectory.append(transition)
def get_data(self):
@@ -677,7 +713,7 @@ class OvercookedTutorial(OvercookedGame):
def __init__(self, layouts=["tutorial_0"], mdp_params={}, playerZero='human', playerOne='AI', phaseTwoScore=15, **kwargs):
- super(OvercookedTutorial, self).__init__(layouts=layouts, mdp_params=mdp_params, playerZero=playerZero, playerOne=playerOne, showPotential=False, **kwargs)
+ super(OvercookedTutorial, self).__init__(layouts=layouts, mdp_params=mdp_params, playerZero=playerZero, playerOne=playerOne, showPotential=False, saveTrajectory=False, **kwargs)
self.phase_two_score = phaseTwoScore
self.phase_two_finished = False
self.max_time = 0
diff --git a/server/static/templates/index.html b/server/static/templates/index.html
index 80219e7..ee5a21c 100644
--- a/server/static/templates/index.html
+++ b/server/static/templates/index.html
@@ -59,11 +59,14 @@
-
+
-
+
+
+
+
diff --git a/up.sh b/up.sh
index 32aba73..baaef8a 100755
--- a/up.sh
+++ b/up.sh
@@ -1,14 +1,75 @@
-if [[ $1 = prod* ]];
+# default arg values
+BUILD_ENV="development"
+
+# saved as .env file for docker-compose
+ENV_FILE=""
+
+# parse kwargs
+# for this and other ways check out https://stackoverflow.com/questions/192249/how-do-i-parse-command-line-arguments-in-bash
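+# example invocation (all flags are optional): ./up.sh --env production --overcooked-branch foo --harl-branch bar --graphics overcooked_graphics_v2.2.js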
+while [[ $# -gt 0 ]]
+do
+key="$1"
+case $key in
+ --build-env|--env)
+ if [[ "$2" = prod* ]];
+ then
+ BUILD_ENV="production"
+ fi
+ if [[ "$2" = dev* ]];
+ then
+ BUILD_ENV="development"
+ fi
+ shift # past argument
+ shift # past value
+ ;;
+ --branch|--overcooked-branch)
+ ENV_FILE+="OVERCOOKED_BRANCH=$2
+"
+ shift # past argument
+ shift # past value
+ ;;
+ --harl-branch)
+ ENV_FILE+="HARL_BRANCH=$2
+"
+ shift # past argument
+ shift # past value
+ ;;
+ --graphics)
+ ENV_FILE+="GRAPHICS=$2
+"
+ shift # past argument
+ shift # past value
+ ;;
+ --agents-dir)
+ ENV_FILE+="AGENTS_DIR=$2
+"
+ shift # past argument
+ shift # past value
+ ;;
+ --trajectories-dir|--trajs-dir)
+ ENV_FILE+="TRAJECTORIES_DIR=$2
+"
+ shift # past argument
+ shift # past value
+ ;;
+ *) # unknown option
+ shift # past argument
+;;
+esac
+done
+
+ENV_FILE+="BUILD_ENV=$BUILD_ENV"
+echo "$ENV_FILE" > .env
+
+
+if [[ "$BUILD_ENV" = "production" ]] ;
then
echo "production"
- export BUILD_ENV=production
-
# Completely re-build all images from scatch without using build cache
docker-compose build --no-cache
docker-compose up --force-recreate -d
else
echo "development"
- export BUILD_ENV=development
# Uncomment the following line if there has been an updated to overcooked-ai code
# docker-compose build --no-cache
From c4b82b6997b1d82b3ce98e39adfaf311c8f724fb Mon Sep 17 00:00:00 2001
From: bmielnicki
Date: Tue, 13 Oct 2020 18:55:00 +0200
Subject: [PATCH 2/2] add trajectory replay; add custom filenames for saved
trajectories
---
README.md | 6 +-
server/app.py | 24 +
server/game.py | 16 +-
server/static/css/jquery-ui.css | 1311 +++++++++++++++++++++++++++
server/static/js/index.js | 14 +
server/static/js/replay.js | 195 ++++
server/static/lib/jquery-min.js | 6 -
server/static/lib/jquery-ui.min.js | 13 +
server/static/lib/jquery.min.js | 2 +
server/static/templates/index.html | 157 ++--
server/static/templates/replay.html | 88 ++
11 files changed, 1753 insertions(+), 79 deletions(-)
create mode 100644 server/static/css/jquery-ui.css
create mode 100644 server/static/js/replay.js
delete mode 100644 server/static/lib/jquery-min.js
create mode 100644 server/static/lib/jquery-ui.min.js
create mode 100644 server/static/lib/jquery.min.js
create mode 100644 server/static/templates/replay.html
diff --git a/README.md b/README.md
index 2d3580d..ba78f87 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,7 @@
a",n=d.getElementsByTagName("*")||[],r=d.getElementsByTagName("a")[0],!r||!r.style||!n.length)return t;s=a.createElement("select"),u=s.appendChild(a.createElement("option")),o=d.getElementsByTagName("input")[0],r.style.cssText="top:1px;float:left;opacity:.5",t.getSetAttribute="t"!==d.className,t.leadingWhitespace=3===d.firstChild.nodeType,t.tbody=!d.getElementsByTagName("tbody").length,t.htmlSerialize=!!d.getElementsByTagName("link").length,t.style=/top/.test(r.getAttribute("style")),t.hrefNormalized="/a"===r.getAttribute("href"),t.opacity=/^0.5/.test(r.style.opacity),t.cssFloat=!!r.style.cssFloat,t.checkOn=!!o.value,t.optSelected=u.selected,t.enctype=!!a.createElement("form").enctype,t.html5Clone="<:nav>"!==a.createElement("nav").cloneNode(!0).outerHTML,t.inlineBlockNeedsLayout=!1,t.shrinkWrapBlocks=!1,t.pixelPosition=!1,t.deleteExpando=!0,t.noCloneEvent=!0,t.reliableMarginRight=!0,t.boxSizingReliable=!0,o.checked=!0,t.noCloneChecked=o.cloneNode(!0).checked,s.disabled=!0,t.optDisabled=!u.disabled;try{delete d.test}catch(h){t.deleteExpando=!1}o=a.createElement("input"),o.setAttribute("value",""),t.input=""===o.getAttribute("value"),o.value="t",o.setAttribute("type","radio"),t.radioValue="t"===o.value,o.setAttribute("checked","t"),o.setAttribute("name","t"),l=a.createDocumentFragment(),l.appendChild(o),t.appendChecked=o.checked,t.checkClone=l.cloneNode(!0).cloneNode(!0).lastChild.checked,d.attachEvent&&(d.attachEvent("onclick",function(){t.noCloneEvent=!1}),d.cloneNode(!0).click());for(f in{submit:!0,change:!0,focusin:!0})d.setAttribute(c="on"+f,"t"),t[f+"Bubbles"]=c in e||d.attributes[c].expando===!1;d.style.backgroundClip="content-box",d.cloneNode(!0).style.backgroundClip="",t.clearCloneStyle="content-box"===d.style.backgroundClip;for(f in x(t))break;return t.ownLast="0"!==f,x(function(){var n,r,o,s="padding:0;margin:0;border:0;display:block;box-sizing:content-box;-moz-box-sizing:content-box;-webkit-box-sizing:content-box;",l=a.getElementsByTagName("body")[0];l&&(n=a.createElement("div"),n.style.cssText="border:0;width:0;height:0;position:absolute;top:0;left:-9999px;margin-top:1px",l.appendChild(n).appendChild(d),d.innerHTML="
").text(i.label)).appendTo(e)},_move:function(t,e){return this.menu.element.is(":visible")?this.menu.isFirstItem()&&/^previous/.test(t)||this.menu.isLastItem()&&/^next/.test(t)?(this.isMultiLine||this._value(this.term),this.menu.blur(),void 0):(this.menu[t](e),void 0):(this.search(null,e),void 0)},widget:function(){return this.menu.element},_value:function(){return this.valueMethod.apply(this.element,arguments)},_keyEvent:function(t,e){(!this.isMultiLine||this.menu.element.is(":visible"))&&(this._move(t,e),e.preventDefault())},_isContentEditable:function(t){if(!t.length)return!1;var e=t.prop("contentEditable");return"inherit"===e?this._isContentEditable(t.parent()):"true"===e}}),t.extend(t.ui.autocomplete,{escapeRegex:function(t){return t.replace(/[\-\[\]{}()*+?.,\\\^$|#\s]/g,"\\$&")},filter:function(e,i){var s=RegExp(t.ui.autocomplete.escapeRegex(i),"i");return t.grep(e,function(t){return s.test(t.label||t.value||t)})}}),t.widget("ui.autocomplete",t.ui.autocomplete,{options:{messages:{noResults:"No search results.",results:function(t){return t+(t>1?" results are":" result is")+" available, use up and down arrow keys to navigate."}}},__response:function(e){var i;this._superApply(arguments),this.options.disabled||this.cancelSearch||(i=e&&e.length?this.options.messages.results(e.length):this.options.messages.noResults,this.liveRegion.children().hide(),t("