How to Add a Secure JavaScript Execution Tool to Microsoft Agent Framework

There is a recurring moment in agent design where a team realizes the model does not just need to reason. It needs to compute. It needs to transform JSON, run a formula, post-process extracted fields, normalize dates, build a dynamic object, or apply domain logic that is simply easier to express in JavaScript than in prompt text.

That is where most teams make a dangerous move. They reach for eval, the Function constructor, or Node’s vm module and tell themselves it is “sandboxed enough.”

It is not.

Node’s own documentation is explicit that node:vm is not a security mechanism and should not be used to run untrusted code. Worker threads are also not the right boundary for hostile code because they are designed for parallelism and can share memory. At the same time, Microsoft Agent Framework is built to let agents call external tools through function tools, so the clean pattern is not “run JavaScript inside the agent host.” The clean pattern is “make JavaScript execution a remote tool with a hardened execution boundary.” (Node.js)

That is the architecture this post covers:

  • Microsoft Agent Framework in .NET
  • A custom function tool exposed to the agent
  • A tRPC call from the tool to a separate Node.js execution service
  • Execution inside a locked-down isolate, not vm
  • Explicit whitelisting of namespaces and packages
  • Validation, time limits, memory limits, and auditable policy controls

The key design principle is simple: treat JavaScript execution as a privileged capability, not a convenience API.

The architecture

At a high level, the flow looks like this:

  1. The agent decides it needs computation.
  2. Microsoft Agent Framework calls a function tool.
  3. The function tool sends a request over HTTP to a tRPC endpoint in Node.js.
  4. The Node service validates the request with Zod.
  5. The Node service creates an isolated execution environment.
  6. Only approved globals and wrapped package facades are injected.
  7. The code runs with strict limits for time, memory, and output shape.
  8. The result is returned to the agent as tool output.

Microsoft Agent Framework supports function tools as first-class extensions to an agent, and tRPC gives you a type-safe RPC layer with input and output validation. That combination is ideal here because the .NET side stays thin and deterministic, while the execution policy lives in one place on the Node side. (Microsoft Learn)

First principle: “secure eval” is really “isolated execution”

It is important to be direct here. There is no magic secureEval() in Node.js. If you are executing model-authored or user-authored JavaScript, the safest practical pattern is:

  • out-of-process execution boundary
  • fresh isolate per run or per tenant pool
  • no ambient filesystem or network access
  • no raw require
  • whitelisted host-provided capabilities only
  • timeouts, memory ceilings, and payload size limits
  • container and OS-level restrictions around the service

Why not use node:vm? Because the Node docs explicitly say not to use it as a security boundary. Why not just use worker threads? Because workers are concurrency primitives, not isolation primitives. A better starting point for JavaScript isolation in Node is isolated-vm, which exposes V8 isolates and is designed for running code in fresh environments with no default Node runtime capabilities. Node’s permission model can also further restrict the Node process itself. (Node.js)

The important nuance is this: even isolated-vm should be one layer, not the only layer. The strongest production posture is to run the execution service in its own locked-down container or workload boundary and assume defense in depth.

Tool contract design

Do not let the model send arbitrary source code and a free-form module list with no governance. Give it a constrained contract.

A good request shape looks like this:

import { z } from "zod";

export const ExecuteJsInput = z.object({
  code: z.string().max(10_000),
  input: z.unknown().optional(),
  allowedNamespaces: z.array(z.string()).default([]),
  allowedPackages: z.array(z.string()).default([]),
  expectedResultSchema: z
    .object({
      type: z.enum(["json", "string", "number", "boolean", "array", "object"]),
    })
    .optional(),
  timeoutMs: z.number().int().min(50).max(3000).default(1000),
});

This matters for two reasons.

First, tRPC is designed around typed procedures, and Zod-driven validation makes the boundary explicit. Second, you now have a place to enforce policy before any code gets near an isolate. (trpc.io)
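The expectedResultSchema field also gives the host a place to check the shape of what comes back before handing it to the agent. A minimal sketch of that check, assuming the enum of types from the schema above (the matchesExpectedType helper is illustrative, not part of any library):

```javascript
// Illustrative helper: verify a sandbox result matches the declared
// expectedResultSchema.type before returning it as tool output.
function matchesExpectedType(result, expected) {
  if (!expected) return true; // no schema declared: accept anything serializable
  switch (expected.type) {
    case "array":
      return Array.isArray(result);
    case "object":
      return typeof result === "object" && result !== null && !Array.isArray(result);
    case "json":
      return result !== undefined && typeof result !== "function";
    default: // "string" | "number" | "boolean"
      return typeof result === expected.type;
  }
}
```

Rejecting shape mismatches here keeps malformed results from propagating back into the model’s context.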

The Microsoft Agent Framework side

On the .NET side, the tool should be boring. That is the goal.

Microsoft Agent Framework lets you expose custom logic through function tools, including by creating an AIFunction from a C# method. The agent does not need to know how tRPC works. It just needs a tool description that makes the capability understandable to the model. (Microsoft Learn)

A simplified example:

using System.ComponentModel;
using System.Net.Http.Json;

public class JavaScriptExecutionTool
{
    private readonly HttpClient _httpClient;

    public JavaScriptExecutionTool(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    [Description("Executes tightly sandboxed JavaScript for deterministic data transformation and calculation.")]
    public async Task<string> ExecuteSandboxedJavaScript(
        [Description("The JavaScript source to execute. Must return a serializable result.")] string code,
        [Description("Optional JSON input payload for the script.")] string? inputJson = null,
        [Description("Approved namespaces the script may access.")] string[]? allowedNamespaces = null,
        [Description("Approved package facades the script may access.")] string[]? allowedPackages = null)
    {
        var request = new
        {
            code,
            input = string.IsNullOrWhiteSpace(inputJson)
                ? null
                : System.Text.Json.JsonSerializer.Deserialize<object>(inputJson),
            allowedNamespaces = allowedNamespaces ?? Array.Empty<string>(),
            allowedPackages = allowedPackages ?? Array.Empty<string>(),
            timeoutMs = 1000
        };

        var response = await _httpClient.PostAsJsonAsync("/trpc/js.execute", request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}

Then you register it as a function tool with your agent. The architectural point is more important than the exact setup syntax: the agent host never evaluates code locally. It delegates execution to the hardened service. (Microsoft Learn)

The tRPC boundary

tRPC is a strong fit because it gives you typed procedures, validation, and a clean contract between the .NET caller and Node service. Even though .NET is not consuming generated TypeScript types directly, the Node service still benefits from strict schemas and a maintainable procedure surface. (trpc.io)

Example router:

import { initTRPC } from "@trpc/server";
import { ExecuteJsInput } from "./schemas";
import { runSandboxedScript } from "./sandbox";
import type { PolicyStore } from "./policy";

// The context carries the policy store so procedures can resolve policy.
// Without declaring it here, ctx.policyStore would not typecheck.
interface Context {
  policyStore: PolicyStore;
}

const t = initTRPC.context<Context>().create();

export const appRouter = t.router({
  js: t.router({
    execute: t.procedure
      .input(ExecuteJsInput)
      .mutation(async ({ input, ctx }) => {
        return await runSandboxedScript(input, ctx.policyStore);
      }),
  }),
});

export type AppRouter = typeof appRouter;

This is where you can also add authentication, tenant context, rate limiting, audit metadata, and policy lookup.

The secure execution service

This is the heart of the design.

The mistake many teams make is trying to whitelist modules by exposing require. Do not do that. If you expose require, you are recreating Node inside the sandbox and dramatically expanding the attack surface.

Instead, preload and wrap approved capabilities in the host, then inject only those facades into the isolate.

That means your whitelist is not “the sandbox may import lodash.” It is “the sandbox may access a safe facade called packages.lodash that exposes only get, pick, and omit.”

That is a much better boundary.
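A facade like that can be built generically in the host before injection. The sketch below is illustrative (the makeFacade helper is not from any library); the point is that only explicitly named members are copied, and the result is frozen so sandboxed code cannot mutate or extend it:

```javascript
// Illustrative helper: copy only the allowed members off a source object
// and freeze the facade so the sandbox cannot tamper with it.
function makeFacade(source, allowedMembers) {
  const facade = {};
  for (const name of allowedMembers) {
    const member = source[name];
    // Bind functions to their source so they keep working off the facade.
    facade[name] = typeof member === "function" ? member.bind(source) : member;
  }
  return Object.freeze(facade);
}

// Only round, floor, and ceil are reachable; Math.random is not.
const safeMath = makeFacade(Math, ["round", "floor", "ceil"]);
```

Freezing matters: a mutable facade shared across runs would let one script poison a function that a later script trusts.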

Example policy registry

type NamespaceFactory = () => Record<string, unknown>;
type PackageFactory = () => Record<string, unknown>;

export const namespaceRegistry: Record<string, NamespaceFactory> = {
  math: () => ({
    round: Math.round,
    floor: Math.floor,
    ceil: Math.ceil,
    max: Math.max,
    min: Math.min,
  }),
  dates: () => ({
    nowIso: () => new Date().toISOString(),
  }),
};

export const packageRegistry: Record<string, PackageFactory> = {
  lodash: () => {
    const { get, pick, omit } = require("lodash");
    return { get, pick, omit };
  },
  decimal: () => {
    const Decimal = require("decimal.js");
    return { Decimal };
  },
};

Notice what is missing: no arbitrary imports, no filesystem, no fetch, no process access, no environment access.

Example isolate runner

import ivm from "isolated-vm";
import type { z } from "zod";
import { ExecuteJsInput } from "./schemas";
import { namespaceRegistry, packageRegistry } from "./registries";
import type { PolicyStore } from "./policy";
import { sanitizeError } from "./errors";

type Registry = Record<string, () => Record<string, unknown>>;

export async function runSandboxedScript(
  request: z.infer<typeof ExecuteJsInput>,
  policyStore: PolicyStore
) {
  const policy = await policyStore.resolve({
    namespaces: request.allowedNamespaces,
    packages: request.allowedPackages,
  });

  const isolate = new ivm.Isolate({ memoryLimit: 64 });
  const context = await isolate.createContext();
  const jail = context.global;
  await jail.set("global", jail.derefInto());

  // Plain data can cross the boundary as a deep copy.
  await jail.set("input", new ivm.ExternalCopy(request.input ?? null).copyInto());

  // Functions cannot be cloned with ExternalCopy. Each facade function is
  // exposed as an ivm.Callback instead, so calls execute in the host and
  // only arguments and return values cross the boundary.
  const inject = async (globalName: string, names: string[], registry: Registry) => {
    await context.eval(`globalThis.${globalName} = {};`);
    for (const name of names) {
      const path = `globalThis.${globalName}[${JSON.stringify(name)}]`;
      await context.eval(`${path} = {};`);
      for (const [key, value] of Object.entries(registry[name]!())) {
        const transferable =
          typeof value === "function"
            ? new ivm.Callback(value as (...args: unknown[]) => unknown)
            : new ivm.ExternalCopy(value).copyInto();
        await context.evalClosure(`${path}[${JSON.stringify(key)}] = $0;`, [transferable]);
      }
    }
  };
  await inject("namespaces", policy.namespaces, namespaceRegistry);
  await inject("packages", policy.packages, packageRegistry);

  const wrapped = `
    "use strict";
    (async function () {
      // Belt and suspenders: none of these exist in the isolate anyway.
      const process = undefined, require = undefined, module = undefined,
        exports = undefined, Buffer = undefined, console = undefined,
        setTimeout = undefined, setInterval = undefined;
      const userFn = async ({ input, namespaces, packages }) => {
        ${request.code}
      };
      return await userFn({ input, namespaces, packages });
    })()
  `;

  const script = await isolate.compileScript(wrapped);
  try {
    // promise: true awaits the async IIFE; copy: true returns a plain copy
    // of the result rather than a live reference into the isolate.
    const result = await script.run(context, {
      timeout: request.timeoutMs,
      promise: true,
      copy: true,
    });
    return { ok: true, result };
  } catch (error) {
    return { ok: false, error: sanitizeError(error) };
  } finally {
    isolate.dispose();
  }
}

This is intentionally opinionated.

  • The sandbox gets input
  • The sandbox gets namespaces
  • The sandbox gets packages
  • The sandbox does not get Node
  • The sandbox does not get require
  • The sandbox does not get the environment

That is the right posture.

The isolated-vm project describes these isolates as separate JavaScript environments free of the extra capabilities that Node normally exposes. That is why it is a better primitive here than vm. (GitHub)

How whitelisting should really work

A lot of teams hear “whitelist packages” and think they should allow date-fns or lodash directly. That is still too coarse.

You want three policy levels.

1. Namespace whitelist

These are internal capability groups you define, such as:

  • math
  • dates
  • currency
  • tax
  • normalizers

These are ideal for domain logic because they let you present stable semantic surfaces to the model.

2. Package facade whitelist

This is not raw NPM package access. It is a curated wrapper over a package.

Example:

const packageRegistry = {
  dateFns: () => {
    const { addDays, formatISO, parseISO } = require("date-fns");
    return { addDays, formatISO, parseISO };
  },
};

3. Tenant or tool policy whitelist

Even if a package exists in the registry, a given agent or tenant may not be allowed to use it.

That means final access should be the intersection of:

  • globally supported capabilities
  • tenant policy
  • current agent policy
  • current tool invocation request

That keeps the model from escalating its own power simply by naming more packages.
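The intersection itself is cheap to compute. A sketch, with illustrative layer names (a capability survives only if every layer allows it):

```javascript
// Effective access = intersection of all policy layers.
// The first layer is the global capability registry; the rest narrow it.
function resolveEffectivePackages(layers) {
  const [first, ...rest] = layers.map((layer) => new Set(layer));
  return [...first].filter((name) => rest.every((set) => set.has(name)));
}

const effective = resolveEffectivePackages([
  ["lodash", "decimal", "dateFns"], // globally supported capabilities
  ["lodash", "dateFns"],            // tenant policy
  ["lodash", "dateFns"],            // current agent policy
  ["lodash", "unknownPackage"],     // current tool invocation request
]);
// effective contains only "lodash"
```

Note that "unknownPackage" never reaches the registry at all: naming it in the request simply fails the intersection.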

What “most secure method” means in practice

Here is the honest version.

If the code is untrusted, the strongest production pattern is not “just use a safer JavaScript library.” The strongest pattern is:

  • dedicated Node execution service
  • running in a separate process or container from the agent host
  • Node permission model enabled where possible
  • no filesystem permission unless explicitly required
  • no network permission unless explicitly required
  • no child process permission
  • no raw module loading
  • isolate-based execution inside the service
  • per-request timeout
  • per-request memory cap
  • rate limiting and audit logging
  • kill-and-recycle strategy for suspicious runs

Node’s permission model is now stable and is specifically intended to restrict access to resources during execution. That makes it a useful outer control around the execution worker process. (Node.js)
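As an outer layer, the execution worker can be launched under the permission model so the process itself is deny-by-default. A hedged sketch of such a launch command (the service path is illustrative; flag names follow the stable permission model, which older Node versions exposed as --experimental-permission):

```shell
# Deny-by-default: with --permission, filesystem access, child processes,
# and workers are all blocked unless explicitly granted. Here only read
# access to the deployed bundle is allowed; no write, net is unaffected.
node --permission --allow-fs-read=/srv/executor/dist /srv/executor/dist/server.js
```

If the isolate layer is ever bypassed, the process-level policy is what stands between the attacker and the host.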

So the recommendation is:

Do not run JavaScript evaluation in the Microsoft Agent Framework process. Run it in a separate hardened execution service, and inside that service use an isolate with only host-injected safe facades.

Prompting the agent correctly

One subtle mistake is giving the model too much freedom in how it uses the tool. Your tool description should bias toward deterministic use cases.

Good use cases:

  • schema normalization
  • mathematical calculations
  • JSON reshaping
  • derived field generation
  • deterministic validation helpers
  • short business-rule transforms

Bad use cases:

  • arbitrary web requests
  • importing unknown libraries
  • long-running workflows
  • anything requiring secret access
  • anything that should really be a reviewed backend feature

You want the tool to feel more like “dynamic formula execution” than “tiny remote code runner.”

Observability and governance

Once you add this capability, you need a paper trail.

Log:

  • agent name
  • conversation or run id
  • caller identity
  • code hash
  • requested namespaces
  • requested packages
  • approved namespaces
  • approved packages
  • execution duration
  • memory tier
  • success or failure
  • sanitized error output

Do not log secrets in payloads. Do log enough to reconstruct who ran what and under which policy.

This matters because the risk is no longer just technical. It is operational. A dynamic execution tool without auditability becomes impossible to govern at scale.

Where this pattern is worth it

This pattern is especially valuable when building agents that need deterministic computation without shipping a new backend endpoint for every micro-use-case.

Examples:

  • tax calculation helpers
  • document extraction post-processing
  • migration mapping rules
  • payroll normalization
  • dynamic scoring or threshold logic
  • transforming AI output into strict structured shapes

In all of those cases, JavaScript is the execution language, but policy is the product.

Final opinion

The wrong way to add JavaScript to an agent is to think of it as a convenience feature.

The right way is to think of it as a controlled runtime.

Microsoft Agent Framework gives you the right extension point through function tools. tRPC gives you a clean typed boundary. Node can host the execution service. But the part that separates a toy from a production design is this: never let the model execute inside your primary trust boundary, and never equate “sandboxed” with “safe” unless you can explain the exact layers doing the isolation. (Microsoft Learn)

That is the architecture I use.

Freeing the space used by Xcode with XcodeCleaner

With every release, Xcode seems to eat more and more space. I was thrilled to find the XcodeCleaner project from Baye on GitHub. This is a great project that allows you to tailor the space Xcode uses. I’ve been able to free up an easy 12GB without any issues.

I highly recommend checking out the project on GitHub at XcodeCleaner. You can build the project from source or download from the macOS App Store for $0.99.

Solving Appcelerator Studio’s “Invalid Request” when Compiling

I use Titanium for many of my projects and have found it a great framework. Just like with any tool, every once in a while you run into some interesting errors. I am often switching networks, and occasionally I get the below error in the Appcelerator Studio console window.

[INFO] :   Alloy compiled in 9.88357s
[INFO] :   Alloy compiler completed successfully
[TRACE] :  offline build file ...
[TRACE] :  online 1
[TRACE] :  sending request ...
[TRACE] :  result from /build-verify=> {"success":false,"error":"invalid request","code":"com.appcelerator.security.invalid.session"}, err=null
[ERROR] :  invalid request

This is an interesting error, as it means the Appcelerator CLI no longer has valid session information. Great for security, but it can be confusing because the error is not resolved by logging in and out of Appcelerator Studio.

This is easily resolved by the following:

  1. Open terminal
  2. Type appc logout, this will log out the Appcelerator CLI
  3. Type appc login, then enter your Appcelerator credentials

If you are using the CLI directly you will most likely never see the above, but those of us who like to use Appcelerator Studio might find this little trick helpful.

Logging Exceptions to Google Analytics Using SwiftyBeaver

There has been an avalanche of Swift-based logging frameworks lately. After trying several, I found SwiftyBeaver was both flexible and simple enough to move all of my apps onto. Similar to most of the other logging frameworks, SwiftyBeaver is extensible. But, unlike many others, there is no real magic injected into the framework, making it easy to read and hopefully maintain. This is best demonstrated by how compact its plugin system is. You really only need to write your business-specific logic; the rest is handled for you.

I use Google Analytics for behavior tracking and basic exception tracking today. Although all of my Google Analytics code is centralized, exception reporting is handled explicitly. With the move to SwiftyBeaver I wanted to see how reporting exceptions to Google Analytics could be handled automatically as part of my logging strategy. To accomplish this I created the SwiftyBeaver Google Analytics Exception Logger, available here. Just as the very long name indicates, this plugin will automatically post error information to Google Analytics.

Before you start

The Google Analytics Exception plugin requires that you first install SwiftyBeaver and Google Analytics.  You can find information on how to do this below.

After both SwiftyBeaver and Google Analytics have been installed you need to copy the plugin into your project. Instructions for doing this are available here.  You’re now ready to start configuring logging in your project.

Creating your Logger

Creating a logger in your project is extremely simple. Typically you will want to add the below to the top of your AppDelegate.swift so you can use logging throughout your project.

import SwiftyBeaver
let logger = SwiftyBeaver.self

Adding the Google Analytics Logger Destination

Now that you have created your SwiftyBeaver logger, you need to add destinations. Without adding any destinations, SwiftyBeaver won’t actually do anything. For this example I’m going to add two destinations. The first will be the built-in Console destination, which simply writes to the Xcode console.

let console = ConsoleDestination()
logger.addDestination(console)

Next we’ll add the Google Analytics Exception Logger. When creating the GABeaver plugin you must add your Google Analytics key. This will be used when reporting exceptions. You can also specify the reporting threshold. This parameter controls the minimum logging level that should be reported to Google Analytics as an exception. By default this is set to only report error levels or greater. If you wanted to report warnings or higher, you could simply provide a threshold of warning and the plugin will automatically send both warnings and errors.

Below illustrates how to add the Google Analytics Exception plugin with the default settings.

let gaErrorLogger = GABeaver(googleAnalyticsKey: "your GA Key")
logger.addDestination(gaErrorLogger)

Optional Configurations

By default only Error Log messages will be sent to Google Analytics.  You can change this by setting the Threshold property on the logger.

For example, you can record all messages with a level of WARNING or greater by doing the following:

gaErrorLogger.Threshold = SwiftyBeaver.Level.Warning

You can also configure the logger to write to the console each time a message is logged. To enable console logging, set the printToConsole property to true as shown below:

gaErrorLogger.printToConsole = true

More Information

Additional information is available on github at SwiftyBeaver-GA-Exception-Logger.

SwiftyBeaver

Resetting the iOS Simulator

There seem to be several ways to reset or clear the iOS Simulator, ranging from pressing the “Reset Content and Settings…” button in the iOS Simulator to deleting folders under ~/Library/Developer/CoreSimulator/DerivedData.

For me, the easiest and fastest approach for my workflow is to simply issue the below in terminal.

xcrun simctl erase all

Review : Xcode 4 Cookbook

Xcode

PACKT recently provided the opportunity for me to review the Xcode 4 Cookbook by Steven Daniel.  As primarily a Titanium developer I spend most of my time in Sublime Text or Titanium Studio so I thought it would provide a good opportunity to spend more time in Xcode. 

First, the book title is a little misleading. I found this book to be more about iOS development than Xcode itself. The author does cover Xcode in detail, but always in the context of building or profiling an app. This approach made it both easier to read and to remember.

The book starts by covering the basics and then moves into creating user interfaces. Having built most of my UIs in code, I found the content on Interface Builder and Storyboards helpful. The recipes are easy to follow and very detailed, perfect for beginners.

My favorite section of the book is around Instruments. Again the author walks us through these recipes in a very detailed, easy-to-follow way. As a side note, you can use the same steps outlined in these recipes in your Titanium project. You just need to generate a full Xcode project using transport.py as detailed here. Coming from a non-Xcode background, the navigation and descriptions of the options available for profiling made Xcode’s sometimes challenging interface easy to understand.

The chapter on iCloud provided a great “getting started” guide, with a clear, easy-to-understand recipe showing how to implement document-based storage using NSUbiquitousKeyValueStore. This was a good introduction to iCloud and very easy for someone new to iCloud to follow along. I do wish the author had gone further on this topic and provided more details on storing documents larger than 64k.

The recipe using CoreImage provides a nice introduction to the topic and discusses a few of the commonly available filters. This recipe provided a nice transition from the earlier CoreGraphics recipe. I do wish the author had provided a multi-filter example, though.

Overall, I really enjoyed reading this book and found the recipes helpful.  The detail and direction provided by the author makes all of the recipes easy to follow and quick to perform.  If you are new to iOS development or Xcode this book provides the resources needed to tackle that learning curve quickly.

A Few Helpful Xcode Plugins

Xcode is one of those IDEs that you either love or hate (sometimes both). Coming from a Visual Studio background, it has taken a while to get used to the Xcode workflow. I’m still hopeful that someone will create a ReSharper-like tool for Xcode, but until then I wanted to share some of the plugins that I’ve found useful.

Completion Dictionary

This plugin provides enhanced auto completion.  I bounce between languages frequently and this makes remembering the specific syntax I’m thinking of much easier.

http://www.obdev.at/products/completion-dictionary/index.html

Code Pilot

Calling this Spotlight for Xcode would be an understatement. Once you get used to Code Pilot you will rarely leave the keyboard. In short, navigation made easy.

http://codepilot.cc/

Accessorizer

This plugin seems to be the closest thing Xcode has to Resharper.  With 40+ code generation actions and a ton of help this is well worth the small purchase price.

http://www.kevincallahan.org/software/accessorizer.html

Color Sense

This visual plugin provides a color picker when working with UIColor ( and NSColor ).  I rarely use this plugin, but when I do it is a time saver.

https://github.com/omz/Colorsense-for-Xcode

KSImageNamed-Xcode

Can’t remember whether that image you just added to the project was called button-separator-left or button-left-separator? Now you don’t have to, because this will autocomplete your imageNamed: calls like you’d expect. Just type in [NSImage imageNamed: or [UIImage imageNamed: and all the images in your project will conveniently appear in the autocomplete menu.

Read more about this plugin on Kent Sutherlands blog.

On Github: https://github.com/ksuther/KSImageNamed-Xcode

XcodeColors

XcodeColors allows you to use colors in the Xcode debugging console.

https://github.com/robbiehanson/XcodeColors

HOStringSense

If you work with large strings in your code you will definitely want to check out this plugin.  You can enter your full string into this plugin’s UI and it will correctly escape and insert the string into your code.

https://github.com/holtwick/HOStringSense-for-Xcode

Bracket Matcher

This plugin automatically inserts paired message-sending brackets. Maybe I’m missing something, but shouldn’t Xcode do this for you by default?

https://github.com/ciaran/xcode-bracket-matcher

Fixins

I recently found a GitHub project by Dave Keck that contains several extremely useful Xcode plugins. My favorite is the CurrentLineHighlighter, but I would recommend checking them all out.

  • CurrentLineHighlighter
  • DisableAnimations
  • FindFix
  • HideDistractions
  • InhibitTabNextPlaceholder
  • TabAcceptsCompletion

https://github.com/davekeck/Xcode-4-Fixins

MiniXCode

Just want to focus on the code? This plugin allows you to reduce the sometimes massive Xcode toolbars.

https://github.com/omz/MiniXcode

 

AppCode

A commenter (Nick) mentioned that JetBrains has a great IDE called AppCode available at http://www.jetbrains.com/objc.

iOS Simulator Switching Devices

With Apple’s latest products, including the iPhone 5, we now have to worry about more form factors than ever. As the method for switching the device type in Titanium Studio is still pretty clumsy, I went in search of an easier way.

The Appcelerator forums had a great post by Rob Gabbard with an AppleScript helper to do just what I was looking for. With a small update to add support for the Retina iPad and iPhone 5, I can now switch devices much more easily.

I wanted to share a link to this script for anyone that might have missed the QA post. Unfortunately this doesn’t address the need for the app to be launched again when the device is changed.

set selectedDevices to choose from list {"iPhone", "iPhone (Retina 3.5-inch)", "iPhone (Retina 4-inch)", "iPad", "iPad (Retina)"} with prompt "Choose device type:" default items {"iPhone"} without multiple selections allowed
if selectedDevices is not false then
    set selectedDevice to item 1 of selectedDevices as string
    set thePListFolderPath to path to preferences folder from user domain as string
    set thePListPath to thePListFolderPath & "com.apple.iphonesimulator.plist"
    
    tell application "System Events"
        tell property list file thePListPath
            tell contents
                set value of property list item "SimulateDevice" to selectedDevice
            end tell
        end tell
    end tell
end if

View the gist of the script.

Creating Keyboard Shortcuts for Switching Tabs in Titanium Studio

I’ve been using Titanium Studio quite a bit lately and noticed the default key mappings for switching tabs were not working for me. Titanium Studio allows you to easily set your key bindings to anything.

Below are the steps I used to create my tab switching shortcuts.

Where is Key Mapping Preferences?

You can update Titanium Studio’s key bindings by going to Titanium Studio and then selecting the Preferences option as shown below.

PreferencesOption

Titanium Studio provides a large number of preferences; you can tailor almost anything to your workflow. The key bindings are found under General, then Keys. If you have trouble finding this, just search for “keys” in the filter box.

PreferenceKeyOption

Mapping Next Tab

Once in the Keys option menu, scroll until you see “Next Tab”, you can also filter by “tab” and it will help you find this option.  Select the “Next Tab” as shown in orange below. Then enter “Command Page Up” in the binding section highlighted in blue. Once completed you will need to press the “Apply” button for this option to be immediately available in Titanium Studio.

NextTab

Mapping Previous Tab

Once in the Keys option menu, scroll until you see “Previous Tab”, you can also filter by “tab” and it will help you find this option.  Select the “Previous Tab” as shown in orange below. Then enter “Command Page Down” in the binding section highlighted in blue. Once completed you will need to press the “Apply” button for this option to be immediately available in Titanium Studio.

PreviousTab

Shortcuts in Action…

Now I can switch between tabs just like in my Eclipse projects.

Moving to the Next Tab

Tab1Selected

CMD + Page Up

Tab2Selected

Moving to Previous Tab

Tab2Selected

CMD + Page Down

Tab1Selected