Merge branch 'drm-next' of git://people.freedesktop.org/~airlied/linux

Pull drm updates from Dave Airlie:
 "This is the main drm pull request for 4.5.  I don't think I've missed
  anything too major, I'm mostly back at work now but I'll probably get
  some sleep in 5 years time.

  Summary:

  New drivers:
   - etnaviv:

     GPU driver for the 3D core on Vivante GPUs used in numerous
     ARM boards.

  Highlights:

  Core:
   - Atomic suspend/resume helpers
   - Move the headers to using userspace friendlier types.
   - Documentation updates
   - Lots of struct_mutex removal.
   - Bunch of DP MST fixes from AMD.

  Panel:
   - More DSI helpers
   - Support for some new basic panels

  i915:
   - Basic Kabylake support
   - DP link training and detect code refactoring
   - fbc/psr fixes
   - FIFO underrun fixes
   - SDE interrupt handling fixes
   - dma-buf/fence support in pageflip path.
   - GPU side for MST audio support

  radeon/amdgpu:
   - Drop UMS support
   - GPUVM/Scheduler optimisations
   - Initial Powerplay support for Tonga/Fiji/CZ/ST
   - ACP audio prerequisites

  nouveau:
   - GK20a instmem improvements
   - PCIE link speed change support

  msm:
   - DSI support for msm8960/apq8064

  tegra:
   - Host1X support for Tegra210 SoC

  vc4:
   - 3D acceleration support

  armada:
   - Get rid of struct mutex

  tda998x:
   - Atomic modesetting support
   - TMDS clock limitations

  omapdrm:
   - Atomic modesetting support
   - improved TILER performance

  rockchip:
   - RK3036 VOP support
   - Atomic modesetting support
   - Synopsys DW MIPI DSI support

  exynos:
   - Runtime PM support
   - of_graph binding for DP panels
   - Cleanup of IPP code
   - Configurable plane support
   - Kernel panic fixes at release time"

* 'drm-next' of git://people.freedesktop.org/~airlied/linux: (711 commits)
  drm/fb_cma_helper: Remove implicit call to disable_unused_functions
  drm/amdgpu: add missing irq.h include
  drm/vmwgfx: Fix a width / pitch mismatch on framebuffer updates
  drm/vmwgfx: Fix an incorrect lock check
  drm: nouveau: fix nouveau_debugfs_init prototype
  drm/nouveau/pci: fix check in nvkm_pcie_set_link
  drm/amdgpu: validate duplicates first
  drm/amdgpu: move VM page tables to the LRU end on CS v2
  drm/ttm: add ttm_bo_move_to_lru_tail function v2
  drm/ttm: fix adding foreign BOs to the swap LRU
  drm/ttm: fix adding foreign BOs to the LRU during init v2
  drm/radeon: use kobj_to_dev()
  drm/amdgpu: use kobj_to_dev()
  drm/amdgpu/cz: force vce clocks when sclks are forced
  drm/amdgpu/cz: force uvd clocks when sclks are forced
  drm/amdgpu/cz: add code to enable forcing VCE clocks
  drm/amdgpu/cz: add code to enable forcing UVD clocks
  drm/amdgpu: fix lost sync_to if scheduler is enabled.
  drm/amd/powerplay: fix static checker warning for return meaningless value.
  drm/sysfs: use kobj_to_dev()
  ...
This commit is contained in:
Linus Torvalds 2016-01-17 13:40:25 -08:00
commit 984065055e
711 changed files with 83802 additions and 27206 deletions



@@ -0,0 +1,54 @@
Etnaviv DRM master device
=========================
The Etnaviv DRM master device is a virtual device needed to list all
Vivante GPU cores that comprise the GPU subsystem.
Required properties:
- compatible: Should be one of
"fsl,imx-gpu-subsystem"
"marvell,dove-gpu-subsystem"
- cores: Should contain a list of phandles pointing to Vivante GPU devices
example:
gpu-subsystem {
compatible = "fsl,imx-gpu-subsystem";
cores = <&gpu_2d>, <&gpu_3d>;
};
Vivante GPU core devices
========================
Required properties:
- compatible: Should be "vivante,gc"
A more specific compatible is not needed, as the cores contain chip
identification registers at fixed locations, which provide all the
necessary information to the driver.
- reg: should be register base and length as documented in the
datasheet
- interrupts: Should contain the core's interrupt line
- clocks: should contain one clock for each entry in clock-names
see Documentation/devicetree/bindings/clock/clock-bindings.txt
- clock-names:
- "bus": AXI/register clock
- "core": GPU core clock
- "shader": Shader clock (only required if GPU has feature PIPE_3D)
Optional properties:
- power-domains: a power domain consumer specifier according to
Documentation/devicetree/bindings/power/power_domain.txt
example:
gpu_3d: gpu@00130000 {
compatible = "vivante,gc";
reg = <0x00130000 0x4000>;
interrupts = <0 9 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clks IMX6QDL_CLK_GPU3D_AXI>,
<&clks IMX6QDL_CLK_GPU3D_CORE>,
<&clks IMX6QDL_CLK_GPU3D_SHADER>;
clock-names = "bus", "core", "shader";
power-domains = <&gpc 1>;
};


@@ -1,3 +1,20 @@
Device-Tree bindings for Samsung Exynos Embedded DisplayPort Transmitter (eDP)
DisplayPort is an industry standard that accommodates the growing adoption
of digital display technology within the PC and CE industries.
It consolidates the internal and external connection methods to reduce device
complexity and cost. It also supports necessary features for important cross
industry applications and provides performance scalability to enable the next
generation of displays that feature higher color depths, refresh rates, and
display resolutions.
The eDP (embedded DisplayPort) device is compliant with the Embedded DisplayPort
standard, as follows:
- DisplayPort standard 1.1a for Exynos5250 and Exynos5260.
- DisplayPort standard 1.3 for Exynos5422s and Exynos5800.
eDP resides between FIMD and panel or FIMD and bridge such as LVDS.
The Exynos display port interface should be configured based on
the type of panel connected to it.
@@ -66,8 +83,15 @@ Optional properties for dp-controller:
Hotplug detect GPIO.
Indicates which GPIO should be used for hotplug
detection
-video interfaces: Device node can contain video interface port
nodes according to [1].
Video interfaces:
Device node can contain video interface port nodes according to [1].
The following are properties specific to those nodes:
endpoint node connected to bridge or panel node:
- remote-endpoint: specifies the endpoint in panel or bridge node.
This node is required in all Exynos DP bindings to
represent the connection between the DP controller
and a bridge or panel.
[1]: Documentation/devicetree/bindings/media/video-interfaces.txt
@@ -111,9 +135,18 @@ Board Specific portion:
};
ports {
port@0 {
port {
dp_out: endpoint {
remote-endpoint = <&bridge_in>;
remote-endpoint = <&dp_in>;
};
};
};
panel {
...
port {
dp_in: endpoint {
remote-endpoint = <&dp_out>;
};
};
};


@@ -14,17 +14,20 @@ Required properties:
- clocks: device clocks
See Documentation/devicetree/bindings/clocks/clock-bindings.txt for details.
- clock-names: the following clocks are required:
* "bus_clk"
* "byte_clk"
* "core_clk"
* "core_mmss_clk"
* "iface_clk"
* "mdp_core_clk"
* "iface_clk"
* "bus_clk"
* "core_mmss_clk"
* "byte_clk"
* "pixel_clk"
* "core_clk"
For DSIv2, we need an additional clock:
* "src_clk"
- vdd-supply: phandle to vdd regulator device node
- vddio-supply: phandle to vdd-io regulator device node
- vdda-supply: phandle to vdda regulator device node
- qcom,dsi-phy: phandle to DSI PHY device node
- syscon-sfpb: A phandle to mmss_sfpb syscon node (only for DSIv2)
Optional properties:
- panel@0: Node of panel connected to this DSI controller.
@@ -51,6 +54,7 @@ Required properties:
* "qcom,dsi-phy-28nm-hpm"
* "qcom,dsi-phy-28nm-lp"
* "qcom,dsi-phy-20nm"
* "qcom,dsi-phy-28nm-8960"
- reg: Physical base address and length of the registers of PLL, PHY and PHY
regulator
- reg-names: The names of register regions. The following regions are required:


@@ -2,18 +2,28 @@ Qualcomm adreno/snapdragon display controller
Required properties:
- compatible:
* "qcom,mdp" - mdp4
* "qcom,mdp4" - mdp4
* "qcom,mdp5" - mdp5
- reg: Physical base address and length of the controller's registers.
- interrupts: The interrupt signal from the display controller.
- connectors: array of phandles for output device(s)
- clocks: device clocks
See ../clocks/clock-bindings.txt for details.
- clock-names: the following clocks are required:
* "core_clk"
* "iface_clk"
* "src_clk"
* "hdmi_clk"
* "mpd_clk"
- clock-names: the following clocks are required.
For MDP4:
* "core_clk"
* "iface_clk"
* "lut_clk"
* "src_clk"
* "hdmi_clk"
* "mdp_clk"
For MDP5:
* "bus_clk"
* "iface_clk"
* "core_clk_src"
* "core_clk"
* "lut_clk" (some MDP5 versions may not need this)
* "vsync_clk"
Optional properties:
- gpus: phandle for gpu device
@@ -26,7 +36,7 @@ Example:
...
mdp: qcom,mdp@5100000 {
compatible = "qcom,mdp";
compatible = "qcom,mdp4";
reg = <0x05100000 0xf0000>;
interrupts = <GIC_SPI 75 0>;
connectors = <&hdmi>;


@@ -0,0 +1,7 @@
Boe Corporation 8.0" WUXGA TFT LCD panel
Required properties:
- compatible: should be "boe,tv080wum-nl0"
This binding is compatible with the simple-panel binding, which is specified
in simple-panel.txt in this directory.


@@ -0,0 +1,7 @@
Innolux Corporation 12.1" G121X1-L03 XGA (1024x768) TFT LCD panel
Required properties:
- compatible: should be "innolux,g121x1-l03"
This binding is compatible with the simple-panel binding, which is specified
in simple-panel.txt in this directory.


@@ -0,0 +1,7 @@
Kyocera Corporation 12.1" XGA (1024x768) TFT LCD panel
Required properties:
- compatible: should be "kyo,tcg121xglp"
This binding is compatible with the simple-panel binding, which is specified
in simple-panel.txt in this directory.


@@ -0,0 +1,20 @@
Panasonic 10" WUXGA TFT LCD panel
Required properties:
- compatible: should be "panasonic,vvx10f034n00"
- reg: DSI virtual channel of the peripheral
- power-supply: phandle of the regulator that provides the supply voltage
Optional properties:
- backlight: phandle of the backlight device attached to the panel
Example:
mdss_dsi@fd922800 {
panel@0 {
compatible = "panasonic,vvx10f034n00";
reg = <0>;
power-supply = <&vreg_vsp>;
backlight = <&lp8566_wled>;
};
};


@@ -0,0 +1,7 @@
QiaoDian XianShi Corporation 4.3" TFT LCD panel
Required properties:
- compatible: should be "qiaodian,qd43003c0-40"
This binding is compatible with the simple-panel binding, which is specified
in simple-panel.txt in this directory.


@@ -0,0 +1,22 @@
Sharp Microelectronics 4.3" qHD TFT LCD panel
Required properties:
- compatible: should be "sharp,ls043t1le01-qhd"
- reg: DSI virtual channel of the peripheral
- power-supply: phandle of the regulator that provides the supply voltage
Optional properties:
- backlight: phandle of the backlight device attached to the panel
- reset-gpios: a GPIO spec for the reset pin
Example:
mdss_dsi@fd922800 {
panel@0 {
compatible = "sharp,ls043t1le01-qhd";
reg = <0>;
avdd-supply = <&pm8941_l22>;
backlight = <&pm8941_wled>;
reset-gpios = <&pm8941_gpios 19 GPIO_ACTIVE_HIGH>;
};
};


@@ -0,0 +1,60 @@
Rockchip specific extensions to the Synopsys Designware MIPI DSI
================================================================
Required properties:
- #address-cells: Should be <1>.
- #size-cells: Should be <0>.
- compatible: "rockchip,rk3288-mipi-dsi", "snps,dw-mipi-dsi".
- reg: Represent the physical address range of the controller.
- interrupts: Represent the controller's interrupt to the CPU(s).
- clocks, clock-names: Phandles to the controller's pll reference
clock(ref) and APB clock(pclk), as described in [1].
- rockchip,grf: phandle to the General Register Files (GRF) syscon used to mux vopl/vopb.
- ports: should contain a port node with endpoint definitions as defined in [2].
  Use reg = <0> for the vopb endpoint and reg = <1> for the vopl endpoint.
[1] Documentation/devicetree/bindings/clock/clock-bindings.txt
[2] Documentation/devicetree/bindings/media/video-interfaces.txt
Example:
mipi_dsi: mipi@ff960000 {
#address-cells = <1>;
#size-cells = <0>;
compatible = "rockchip,rk3288-mipi-dsi", "snps,dw-mipi-dsi";
reg = <0xff960000 0x4000>;
interrupts = <GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cru SCLK_MIPI_24M>, <&cru PCLK_MIPI_DSI0>;
clock-names = "ref", "pclk";
rockchip,grf = <&grf>;
status = "okay";
ports {
#address-cells = <1>;
#size-cells = <0>;
reg = <1>;
mipi_in: port {
#address-cells = <1>;
#size-cells = <0>;
mipi_in_vopb: endpoint@0 {
reg = <0>;
remote-endpoint = <&vopb_out_mipi>;
};
mipi_in_vopl: endpoint@1 {
reg = <1>;
remote-endpoint = <&vopl_out_mipi>;
};
};
};
panel {
compatible = "boe,tv080wum-nl0";
reg = <0>;
enable-gpios = <&gpio7 3 GPIO_ACTIVE_HIGH>;
pinctrl-names = "default";
pinctrl-0 = <&lcd_en>;
backlight = <&backlight>;
status = "okay";
};
};


@@ -7,6 +7,7 @@ buffer to an external LCD interface.
Required properties:
- compatible: value should be one of the following
"rockchip,rk3288-vop";
"rockchip,rk3036-vop";
- interrupts: should contain a list of all VOP IP block interrupts in the
order: VSYNC, LCD_SYSTEM. The interrupt specifier


@@ -7,6 +7,10 @@ Required properties:
- reg: should contain G-Scaler physical address location and length.
- interrupts: should contain G-Scaler interrupt number
Optional properties:
- samsung,sysreg: phandle to the syscon used to control the system registers to
set writeback input and destination
Example:
gsc_0: gsc@0x13e00000 {


@@ -33,6 +33,7 @@ auo AU Optronics Corporation
avago Avago Technologies
avic Shanghai AVIC Optoelectronics Co., Ltd.
axis Axis Communications AB
boe BOE Technology Group Co., Ltd.
bosch Bosch Sensortec GmbH
boundary Boundary Devices Inc.
brcm Broadcom Corporation
@@ -123,6 +124,7 @@ jedec JEDEC Solid State Technology Association
karo Ka-Ro electronics GmbH
keymile Keymile GmbH
kinetic Kinetic Technologies
kyo Kyocera Corporation
lacie LaCie
lantiq Lantiq Semiconductor
lenovo Lenovo Group Ltd.
@@ -181,6 +183,7 @@ qca Qualcomm Atheros, Inc.
qcom Qualcomm Technologies, Inc
qemu QEMU, a generic and open source machine emulator and virtualizer
qi Qi Hardware
qiaodian QiaoDian XianShi Corporation
qnap QNAP Systems, Inc.
radxa Radxa
raidsonic RaidSonic Technology GmbH
@@ -239,6 +242,7 @@ v3 V3 Semiconductor
variscite Variscite Ltd.
via VIA Technologies, Inc.
virtio Virtual I/O Device Specification, developed by the OASIS consortium
vivante Vivante Corporation
voipac Voipac Technologies s.r.o.
wexler Wexler
winbond Winbond Electronics corp.


@@ -3768,6 +3768,15 @@ S: Maintained
F: drivers/gpu/drm/sti
F: Documentation/devicetree/bindings/display/st,stih4xx.txt
DRM DRIVERS FOR VIVANTE GPU IP
M: Lucas Stach <l.stach@pengutronix.de>
R: Russell King <linux+etnaviv@arm.linux.org.uk>
R: Christian Gmeiner <christian.gmeiner@gmail.com>
L: dri-devel@lists.freedesktop.org
S: Maintained
F: drivers/gpu/drm/etnaviv
F: Documentation/devicetree/bindings/display/etnaviv
DSBR100 USB FM RADIO DRIVER
M: Alexey Klimov <klimov.linux@gmail.com>
L: linux-media@vger.kernel.org


@@ -122,6 +122,12 @@
compatible = "auo,b133htn01";
power-supply = <&tps65090_fet6>;
backlight = <&backlight>;
port {
panel_in: endpoint {
remote-endpoint = <&dp_out>;
};
};
};
mmc1_pwrseq: mmc1_pwrseq {
@@ -148,7 +154,14 @@
samsung,link-rate = <0x0a>;
samsung,lane-count = <2>;
samsung,hpd-gpio = <&gpx2 6 GPIO_ACTIVE_HIGH>;
panel = <&panel>;
ports {
port {
dp_out: endpoint {
remote-endpoint = <&panel_in>;
};
};
};
};
&fimd {


@@ -160,6 +160,7 @@ config DRM_AMDGPU
If M is selected, the module will be called amdgpu.
source "drivers/gpu/drm/amd/amdgpu/Kconfig"
source "drivers/gpu/drm/amd/powerplay/Kconfig"
source "drivers/gpu/drm/nouveau/Kconfig"
@@ -266,3 +267,5 @@ source "drivers/gpu/drm/amd/amdkfd/Kconfig"
source "drivers/gpu/drm/imx/Kconfig"
source "drivers/gpu/drm/vc4/Kconfig"
source "drivers/gpu/drm/etnaviv/Kconfig"


@@ -75,3 +75,4 @@ obj-y += i2c/
obj-y += panel/
obj-y += bridge/
obj-$(CONFIG_DRM_FSL_DCU) += fsl-dcu/
obj-$(CONFIG_DRM_ETNAVIV) += etnaviv/


@@ -2,10 +2,13 @@
# Makefile for the drm device driver. This driver provides support for the
# Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
ccflags-y := -Iinclude/drm -Idrivers/gpu/drm/amd/include/asic_reg \
-Idrivers/gpu/drm/amd/include \
-Idrivers/gpu/drm/amd/amdgpu \
-Idrivers/gpu/drm/amd/scheduler
FULL_AMD_PATH=$(src)/..
ccflags-y := -Iinclude/drm -I$(FULL_AMD_PATH)/include/asic_reg \
-I$(FULL_AMD_PATH)/include \
-I$(FULL_AMD_PATH)/amdgpu \
-I$(FULL_AMD_PATH)/scheduler \
-I$(FULL_AMD_PATH)/powerplay/inc
amdgpu-y := amdgpu_drv.o
@@ -44,6 +47,7 @@ amdgpu-y += \
# add SMC block
amdgpu-y += \
amdgpu_dpm.o \
amdgpu_powerplay.o \
cz_smc.o cz_dpm.o \
tonga_smc.o tonga_dpm.o \
fiji_smc.o fiji_dpm.o \
@@ -94,6 +98,14 @@ amdgpu-$(CONFIG_VGA_SWITCHEROO) += amdgpu_atpx_handler.o
amdgpu-$(CONFIG_ACPI) += amdgpu_acpi.o
amdgpu-$(CONFIG_MMU_NOTIFIER) += amdgpu_mn.o
ifneq ($(CONFIG_DRM_AMD_POWERPLAY),)
include $(FULL_AMD_PATH)/powerplay/Makefile
amdgpu-y += $(AMD_POWERPLAY_FILES)
endif
obj-$(CONFIG_DRM_AMDGPU)+= amdgpu.o
CFLAGS_amdgpu_trace_points.o := -I$(src)


@@ -52,6 +52,7 @@
#include "amdgpu_irq.h"
#include "amdgpu_ucode.h"
#include "amdgpu_gds.h"
#include "amd_powerplay.h"
#include "gpu_scheduler.h"
@@ -85,6 +86,7 @@ extern int amdgpu_enable_scheduler;
extern int amdgpu_sched_jobs;
extern int amdgpu_sched_hw_submission;
extern int amdgpu_enable_semaphores;
extern int amdgpu_powerplay;
#define AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS 3000
#define AMDGPU_MAX_USEC_TIMEOUT 100000 /* 100 ms */
@@ -918,8 +920,8 @@ struct amdgpu_ring {
#define AMDGPU_VM_FAULT_STOP_ALWAYS 2
struct amdgpu_vm_pt {
struct amdgpu_bo *bo;
uint64_t addr;
struct amdgpu_bo_list_entry entry;
uint64_t addr;
};
struct amdgpu_vm_id {
@@ -981,9 +983,12 @@ struct amdgpu_vm_manager {
void amdgpu_vm_manager_fini(struct amdgpu_device *adev);
int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm);
void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm);
struct amdgpu_bo_list_entry *amdgpu_vm_get_bos(struct amdgpu_device *adev,
struct amdgpu_vm *vm,
struct list_head *head);
void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
struct list_head *validated,
struct amdgpu_bo_list_entry *entry);
void amdgpu_vm_get_pt_bos(struct amdgpu_vm *vm, struct list_head *duplicates);
void amdgpu_vm_move_pt_bos_in_lru(struct amdgpu_device *adev,
struct amdgpu_vm *vm);
int amdgpu_vm_grab_id(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
struct amdgpu_sync *sync);
void amdgpu_vm_flush(struct amdgpu_ring *ring,
@@ -1024,11 +1029,9 @@ int amdgpu_vm_free_job(struct amdgpu_job *job);
* context related structures
*/
#define AMDGPU_CTX_MAX_CS_PENDING 16
struct amdgpu_ctx_ring {
uint64_t sequence;
struct fence *fences[AMDGPU_CTX_MAX_CS_PENDING];
struct fence **fences;
struct amd_sched_entity entity;
};
@@ -1037,6 +1040,7 @@ struct amdgpu_ctx {
struct amdgpu_device *adev;
unsigned reset_counter;
spinlock_t ring_lock;
struct fence **fences;
struct amdgpu_ctx_ring rings[AMDGPU_MAX_RINGS];
};
@@ -1047,7 +1051,7 @@ struct amdgpu_ctx_mgr {
struct idr ctx_handles;
};
int amdgpu_ctx_init(struct amdgpu_device *adev, bool kernel,
int amdgpu_ctx_init(struct amdgpu_device *adev, enum amd_sched_priority pri,
struct amdgpu_ctx *ctx);
void amdgpu_ctx_fini(struct amdgpu_ctx *ctx);
@@ -1254,7 +1258,7 @@ struct amdgpu_cs_parser {
unsigned nchunks;
struct amdgpu_cs_chunk *chunks;
/* relocations */
struct amdgpu_bo_list_entry *vm_bos;
struct amdgpu_bo_list_entry vm_pd;
struct list_head validated;
struct fence *fence;
@@ -1301,31 +1305,7 @@ struct amdgpu_wb {
int amdgpu_wb_get(struct amdgpu_device *adev, u32 *wb);
void amdgpu_wb_free(struct amdgpu_device *adev, u32 wb);
/**
* struct amdgpu_pm - power management datas
* It keeps track of various data needed to take powermanagement decision.
*/
enum amdgpu_pm_state_type {
/* not used for dpm */
POWER_STATE_TYPE_DEFAULT,
POWER_STATE_TYPE_POWERSAVE,
/* user selectable states */
POWER_STATE_TYPE_BATTERY,
POWER_STATE_TYPE_BALANCED,
POWER_STATE_TYPE_PERFORMANCE,
/* internal states */
POWER_STATE_TYPE_INTERNAL_UVD,
POWER_STATE_TYPE_INTERNAL_UVD_SD,
POWER_STATE_TYPE_INTERNAL_UVD_HD,
POWER_STATE_TYPE_INTERNAL_UVD_HD2,
POWER_STATE_TYPE_INTERNAL_UVD_MVC,
POWER_STATE_TYPE_INTERNAL_BOOT,
POWER_STATE_TYPE_INTERNAL_THERMAL,
POWER_STATE_TYPE_INTERNAL_ACPI,
POWER_STATE_TYPE_INTERNAL_ULV,
POWER_STATE_TYPE_INTERNAL_3DPERF,
};
enum amdgpu_int_thermal_type {
THERMAL_TYPE_NONE,
@@ -1607,8 +1587,8 @@ struct amdgpu_dpm {
/* vce requirements */
struct amdgpu_vce_state vce_states[AMDGPU_MAX_VCE_LEVELS];
enum amdgpu_vce_level vce_level;
enum amdgpu_pm_state_type state;
enum amdgpu_pm_state_type user_state;
enum amd_pm_state_type state;
enum amd_pm_state_type user_state;
u32 platform_caps;
u32 voltage_response_time;
u32 backbias_response_time;
@@ -1661,8 +1641,13 @@ struct amdgpu_pm {
const struct firmware *fw; /* SMC firmware */
uint32_t fw_version;
const struct amdgpu_dpm_funcs *funcs;
uint32_t pcie_gen_mask;
uint32_t pcie_mlw_mask;
struct amd_pp_display_configuration pm_display_cfg;/* set by DAL */
};
void amdgpu_get_pcie_info(struct amdgpu_device *adev);
/*
* UVD
*/
@@ -1830,6 +1815,8 @@ struct amdgpu_cu_info {
*/
struct amdgpu_asic_funcs {
bool (*read_disabled_bios)(struct amdgpu_device *adev);
bool (*read_bios_from_rom)(struct amdgpu_device *adev,
u8 *bios, u32 length_bytes);
int (*read_register)(struct amdgpu_device *adev, u32 se_num,
u32 sh_num, u32 reg_offset, u32 *value);
void (*set_vga_state)(struct amdgpu_device *adev, bool state);
@@ -2060,6 +2047,10 @@ struct amdgpu_device {
/* interrupts */
struct amdgpu_irq irq;
/* powerplay */
struct amd_powerplay powerplay;
bool pp_enabled;
/* dpm */
struct amdgpu_pm pm;
u32 cg_flags;
@@ -2236,6 +2227,7 @@ amdgpu_get_sdma_instance(struct amdgpu_ring *ring)
#define amdgpu_asic_set_vce_clocks(adev, ev, ec) (adev)->asic_funcs->set_vce_clocks((adev), (ev), (ec))
#define amdgpu_asic_get_gpu_clock_counter(adev) (adev)->asic_funcs->get_gpu_clock_counter((adev))
#define amdgpu_asic_read_disabled_bios(adev) (adev)->asic_funcs->read_disabled_bios((adev))
#define amdgpu_asic_read_bios_from_rom(adev, b, l) (adev)->asic_funcs->read_bios_from_rom((adev), (b), (l))
#define amdgpu_asic_read_register(adev, se, sh, offset, v)((adev)->asic_funcs->read_register((adev), (se), (sh), (offset), (v)))
#define amdgpu_asic_get_cu_info(adev, info) (adev)->asic_funcs->get_cu_info((adev), (info))
#define amdgpu_gart_flush_gpu_tlb(adev, vmid) (adev)->gart.gart_funcs->flush_gpu_tlb((adev), (vmid))
@@ -2277,24 +2269,78 @@ amdgpu_get_sdma_instance(struct amdgpu_ring *ring)
#define amdgpu_display_resume_mc_access(adev, s) (adev)->mode_info.funcs->resume_mc_access((adev), (s))
#define amdgpu_emit_copy_buffer(adev, ib, s, d, b) (adev)->mman.buffer_funcs->emit_copy_buffer((ib), (s), (d), (b))
#define amdgpu_emit_fill_buffer(adev, ib, s, d, b) (adev)->mman.buffer_funcs->emit_fill_buffer((ib), (s), (d), (b))
#define amdgpu_dpm_get_temperature(adev) (adev)->pm.funcs->get_temperature((adev))
#define amdgpu_dpm_pre_set_power_state(adev) (adev)->pm.funcs->pre_set_power_state((adev))
#define amdgpu_dpm_set_power_state(adev) (adev)->pm.funcs->set_power_state((adev))
#define amdgpu_dpm_post_set_power_state(adev) (adev)->pm.funcs->post_set_power_state((adev))
#define amdgpu_dpm_display_configuration_changed(adev) (adev)->pm.funcs->display_configuration_changed((adev))
#define amdgpu_dpm_get_sclk(adev, l) (adev)->pm.funcs->get_sclk((adev), (l))
#define amdgpu_dpm_get_mclk(adev, l) (adev)->pm.funcs->get_mclk((adev), (l))
#define amdgpu_dpm_print_power_state(adev, ps) (adev)->pm.funcs->print_power_state((adev), (ps))
#define amdgpu_dpm_debugfs_print_current_performance_level(adev, m) (adev)->pm.funcs->debugfs_print_current_performance_level((adev), (m))
#define amdgpu_dpm_force_performance_level(adev, l) (adev)->pm.funcs->force_performance_level((adev), (l))
#define amdgpu_dpm_vblank_too_short(adev) (adev)->pm.funcs->vblank_too_short((adev))
#define amdgpu_dpm_powergate_uvd(adev, g) (adev)->pm.funcs->powergate_uvd((adev), (g))
#define amdgpu_dpm_powergate_vce(adev, g) (adev)->pm.funcs->powergate_vce((adev), (g))
#define amdgpu_dpm_enable_bapm(adev, e) (adev)->pm.funcs->enable_bapm((adev), (e))
#define amdgpu_dpm_set_fan_control_mode(adev, m) (adev)->pm.funcs->set_fan_control_mode((adev), (m))
#define amdgpu_dpm_get_fan_control_mode(adev) (adev)->pm.funcs->get_fan_control_mode((adev))
#define amdgpu_dpm_set_fan_speed_percent(adev, s) (adev)->pm.funcs->set_fan_speed_percent((adev), (s))
#define amdgpu_dpm_get_fan_speed_percent(adev, s) (adev)->pm.funcs->get_fan_speed_percent((adev), (s))
#define amdgpu_dpm_get_temperature(adev) \
(adev)->pp_enabled ? \
(adev)->powerplay.pp_funcs->get_temperature((adev)->powerplay.pp_handle) : \
(adev)->pm.funcs->get_temperature((adev))
#define amdgpu_dpm_set_fan_control_mode(adev, m) \
(adev)->pp_enabled ? \
(adev)->powerplay.pp_funcs->set_fan_control_mode((adev)->powerplay.pp_handle, (m)) : \
(adev)->pm.funcs->set_fan_control_mode((adev), (m))
#define amdgpu_dpm_get_fan_control_mode(adev) \
(adev)->pp_enabled ? \
(adev)->powerplay.pp_funcs->get_fan_control_mode((adev)->powerplay.pp_handle) : \
(adev)->pm.funcs->get_fan_control_mode((adev))
#define amdgpu_dpm_set_fan_speed_percent(adev, s) \
(adev)->pp_enabled ? \
(adev)->powerplay.pp_funcs->set_fan_speed_percent((adev)->powerplay.pp_handle, (s)) : \
(adev)->pm.funcs->set_fan_speed_percent((adev), (s))
#define amdgpu_dpm_get_fan_speed_percent(adev, s) \
(adev)->pp_enabled ? \
(adev)->powerplay.pp_funcs->get_fan_speed_percent((adev)->powerplay.pp_handle, (s)) : \
(adev)->pm.funcs->get_fan_speed_percent((adev), (s))
#define amdgpu_dpm_get_sclk(adev, l) \
(adev)->pp_enabled ? \
(adev)->powerplay.pp_funcs->get_sclk((adev)->powerplay.pp_handle, (l)) : \
(adev)->pm.funcs->get_sclk((adev), (l))
#define amdgpu_dpm_get_mclk(adev, l) \
(adev)->pp_enabled ? \
(adev)->powerplay.pp_funcs->get_mclk((adev)->powerplay.pp_handle, (l)) : \
(adev)->pm.funcs->get_mclk((adev), (l))
#define amdgpu_dpm_force_performance_level(adev, l) \
(adev)->pp_enabled ? \
(adev)->powerplay.pp_funcs->force_performance_level((adev)->powerplay.pp_handle, (l)) : \
(adev)->pm.funcs->force_performance_level((adev), (l))
#define amdgpu_dpm_powergate_uvd(adev, g) \
(adev)->pp_enabled ? \
(adev)->powerplay.pp_funcs->powergate_uvd((adev)->powerplay.pp_handle, (g)) : \
(adev)->pm.funcs->powergate_uvd((adev), (g))
#define amdgpu_dpm_powergate_vce(adev, g) \
(adev)->pp_enabled ? \
(adev)->powerplay.pp_funcs->powergate_vce((adev)->powerplay.pp_handle, (g)) : \
(adev)->pm.funcs->powergate_vce((adev), (g))
#define amdgpu_dpm_debugfs_print_current_performance_level(adev, m) \
(adev)->pp_enabled ? \
(adev)->powerplay.pp_funcs->print_current_performance_level((adev)->powerplay.pp_handle, (m)) : \
(adev)->pm.funcs->debugfs_print_current_performance_level((adev), (m))
#define amdgpu_dpm_get_current_power_state(adev) \
(adev)->powerplay.pp_funcs->get_current_power_state((adev)->powerplay.pp_handle)
#define amdgpu_dpm_get_performance_level(adev) \
(adev)->powerplay.pp_funcs->get_performance_level((adev)->powerplay.pp_handle)
#define amdgpu_dpm_dispatch_task(adev, event_id, input, output) \
(adev)->powerplay.pp_funcs->dispatch_tasks((adev)->powerplay.pp_handle, (event_id), (input), (output))
#define amdgpu_gds_switch(adev, r, v, d, w, a) (adev)->gds.funcs->patch_gds_switch((r), (v), (d), (w), (a))

View File

@ -29,66 +29,10 @@
#include <drm/drmP.h>
#include <drm/drm_crtc_helper.h>
#include "amdgpu.h"
#include "amdgpu_acpi.h"
#include "amd_acpi.h"
#include "atom.h"
#define ACPI_AC_CLASS "ac_adapter"
extern void amdgpu_pm_acpi_event_handler(struct amdgpu_device *adev);
struct atif_verify_interface {
u16 size; /* structure size in bytes (includes size field) */
u16 version; /* version */
u32 notification_mask; /* supported notifications mask */
u32 function_bits; /* supported functions bit vector */
} __packed;
struct atif_system_params {
u16 size; /* structure size in bytes (includes size field) */
u32 valid_mask; /* valid flags mask */
u32 flags; /* flags */
u8 command_code; /* notify command code */
} __packed;
struct atif_sbios_requests {
u16 size; /* structure size in bytes (includes size field) */
u32 pending; /* pending sbios requests */
u8 panel_exp_mode; /* panel expansion mode */
u8 thermal_gfx; /* thermal state: target gfx controller */
u8 thermal_state; /* thermal state: state id (0: exit state, non-0: state) */
u8 forced_power_gfx; /* forced power state: target gfx controller */
u8 forced_power_state; /* forced power state: state id */
u8 system_power_src; /* system power source */
u8 backlight_level; /* panel backlight level (0-255) */
} __packed;
#define ATIF_NOTIFY_MASK 0x3
#define ATIF_NOTIFY_NONE 0
#define ATIF_NOTIFY_81 1
#define ATIF_NOTIFY_N 2
struct atcs_verify_interface {
u16 size; /* structure size in bytes (includes size field) */
u16 version; /* version */
u32 function_bits; /* supported functions bit vector */
} __packed;
#define ATCS_VALID_FLAGS_MASK 0x3
struct atcs_pref_req_input {
u16 size; /* structure size in bytes (includes size field) */
u16 client_id; /* client id (bit 2-0: func num, 7-3: dev num, 15-8: bus num) */
u16 valid_flags_mask; /* valid flags mask */
u16 flags; /* flags */
u8 req_type; /* request type */
u8 perf_req; /* performance request */
} __packed;
struct atcs_pref_req_output {
u16 size; /* structure size in bytes (includes size field) */
u8 ret_val; /* return value */
} __packed;
/* Call the ATIF method
*/
/**

View File

@ -11,7 +11,7 @@
#include <linux/acpi.h>
#include <linux/pci.h>
#include "amdgpu_acpi.h"
#include "amd_acpi.h"
struct amdgpu_atpx_functions {
bool px_params;

View File

@ -35,6 +35,13 @@
* BIOS.
*/
#define AMD_VBIOS_SIGNATURE " 761295520"
#define AMD_VBIOS_SIGNATURE_OFFSET 0x30
#define AMD_VBIOS_SIGNATURE_SIZE sizeof(AMD_VBIOS_SIGNATURE)
#define AMD_VBIOS_SIGNATURE_END (AMD_VBIOS_SIGNATURE_OFFSET + AMD_VBIOS_SIGNATURE_SIZE)
#define AMD_IS_VALID_VBIOS(p) ((p)[0] == 0x55 && (p)[1] == 0xAA)
#define AMD_VBIOS_LENGTH(p) ((p)[2] << 9)
/* If you boot an IGP board with a discrete card as the primary,
* the IGP rom is not accessible via the rom bar as the IGP rom is
* part of the system bios. On boot, the system bios puts a
@ -58,7 +65,7 @@ static bool igp_read_bios_from_vram(struct amdgpu_device *adev)
return false;
}
if (size == 0 || bios[0] != 0x55 || bios[1] != 0xaa) {
if (size == 0 || !AMD_IS_VALID_VBIOS(bios)) {
iounmap(bios);
return false;
}
@ -74,7 +81,7 @@ static bool igp_read_bios_from_vram(struct amdgpu_device *adev)
bool amdgpu_read_bios(struct amdgpu_device *adev)
{
uint8_t __iomem *bios, val1, val2;
uint8_t __iomem *bios, val[2];
size_t size;
adev->bios = NULL;
@ -84,10 +91,10 @@ bool amdgpu_read_bios(struct amdgpu_device *adev)
return false;
}
val1 = readb(&bios[0]);
val2 = readb(&bios[1]);
val[0] = readb(&bios[0]);
val[1] = readb(&bios[1]);
if (size == 0 || val1 != 0x55 || val2 != 0xaa) {
if (size == 0 || !AMD_IS_VALID_VBIOS(val)) {
pci_unmap_rom(adev->pdev, bios);
return false;
}
@ -101,6 +108,38 @@ bool amdgpu_read_bios(struct amdgpu_device *adev)
return true;
}
static bool amdgpu_read_bios_from_rom(struct amdgpu_device *adev)
{
u8 header[AMD_VBIOS_SIGNATURE_END+1] = {0};
int len;
if (!adev->asic_funcs->read_bios_from_rom)
return false;
/* validate VBIOS signature */
if (amdgpu_asic_read_bios_from_rom(adev, &header[0], sizeof(header)) == false)
return false;
header[AMD_VBIOS_SIGNATURE_END] = 0;
if ((!AMD_IS_VALID_VBIOS(header)) ||
0 != memcmp((char *)&header[AMD_VBIOS_SIGNATURE_OFFSET],
AMD_VBIOS_SIGNATURE,
strlen(AMD_VBIOS_SIGNATURE)))
return false;
/* valid vbios, go on */
len = AMD_VBIOS_LENGTH(header);
len = ALIGN(len, 4);
adev->bios = kmalloc(len, GFP_KERNEL);
if (!adev->bios) {
DRM_ERROR("no memory to allocate for BIOS\n");
return false;
}
/* read complete BIOS */
return amdgpu_asic_read_bios_from_rom(adev, adev->bios, len);
}
static bool amdgpu_read_platform_bios(struct amdgpu_device *adev)
{
uint8_t __iomem *bios;
@ -113,7 +152,7 @@ static bool amdgpu_read_platform_bios(struct amdgpu_device *adev)
return false;
}
if (size == 0 || bios[0] != 0x55 || bios[1] != 0xaa) {
if (size == 0 || !AMD_IS_VALID_VBIOS(bios)) {
return false;
}
adev->bios = kmemdup(bios, size, GFP_KERNEL);
@ -230,7 +269,7 @@ static bool amdgpu_atrm_get_bios(struct amdgpu_device *adev)
break;
}
if (i == 0 || adev->bios[0] != 0x55 || adev->bios[1] != 0xaa) {
if (i == 0 || !AMD_IS_VALID_VBIOS(adev->bios)) {
kfree(adev->bios);
return false;
}
@@ -319,6 +358,9 @@ bool amdgpu_get_bios(struct amdgpu_device *adev)
r = igp_read_bios_from_vram(adev);
if (r == false)
r = amdgpu_read_bios(adev);
if (r == false) {
r = amdgpu_read_bios_from_rom(adev);
}
if (r == false) {
r = amdgpu_read_disabled_bios(adev);
}
@@ -330,7 +372,7 @@ bool amdgpu_get_bios(struct amdgpu_device *adev)
adev->bios = NULL;
return false;
}
if (adev->bios[0] != 0x55 || adev->bios[1] != 0xaa) {
if (!AMD_IS_VALID_VBIOS(adev->bios)) {
printk("BIOS signature incorrect %x %x\n", adev->bios[0], adev->bios[1]);
goto free_bios;
}


@@ -24,6 +24,7 @@
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/pci.h>
#include <linux/acpi.h>
#include <drm/drmP.h>
#include <linux/firmware.h>
#include <drm/amdgpu_drm.h>
@@ -32,7 +33,6 @@
#include "atom.h"
#include "amdgpu_ucode.h"
struct amdgpu_cgs_device {
struct cgs_device base;
struct amdgpu_device *adev;
@@ -398,6 +398,41 @@ static void amdgpu_cgs_write_pci_config_dword(void *cgs_device, unsigned addr,
WARN(ret, "pci_write_config_dword error");
}
static int amdgpu_cgs_get_pci_resource(void *cgs_device,
enum cgs_resource_type resource_type,
uint64_t size,
uint64_t offset,
uint64_t *resource_base)
{
CGS_FUNC_ADEV;
if (resource_base == NULL)
return -EINVAL;
switch (resource_type) {
case CGS_RESOURCE_TYPE_MMIO:
if (adev->rmmio_size == 0)
return -ENOENT;
if ((offset + size) > adev->rmmio_size)
return -EINVAL;
*resource_base = adev->rmmio_base;
return 0;
case CGS_RESOURCE_TYPE_DOORBELL:
if (adev->doorbell.size == 0)
return -ENOENT;
if ((offset + size) > adev->doorbell.size)
return -EINVAL;
*resource_base = adev->doorbell.base;
return 0;
case CGS_RESOURCE_TYPE_FB:
case CGS_RESOURCE_TYPE_IO:
case CGS_RESOURCE_TYPE_ROM:
default:
return -EINVAL;
}
}
static const void *amdgpu_cgs_atom_get_data_table(void *cgs_device,
unsigned table, uint16_t *size,
uint8_t *frev, uint8_t *crev)
@@ -703,6 +738,9 @@ static int amdgpu_cgs_get_firmware_info(void *cgs_device,
case CHIP_TONGA:
strcpy(fw_name, "amdgpu/tonga_smc.bin");
break;
case CHIP_FIJI:
strcpy(fw_name, "amdgpu/fiji_smc.bin");
break;
default:
DRM_ERROR("SMC firmware not supported\n");
return -EINVAL;
@@ -736,6 +774,288 @@ static int amdgpu_cgs_get_firmware_info(void *cgs_device,
return 0;
}
static int amdgpu_cgs_query_system_info(void *cgs_device,
struct cgs_system_info *sys_info)
{
CGS_FUNC_ADEV;
if (NULL == sys_info)
return -ENODEV;
if (sizeof(struct cgs_system_info) != sys_info->size)
return -ENODEV;
switch (sys_info->info_id) {
case CGS_SYSTEM_INFO_ADAPTER_BDF_ID:
sys_info->value = adev->pdev->devfn | (adev->pdev->bus->number << 8);
break;
case CGS_SYSTEM_INFO_PCIE_GEN_INFO:
sys_info->value = adev->pm.pcie_gen_mask;
break;
case CGS_SYSTEM_INFO_PCIE_MLW:
sys_info->value = adev->pm.pcie_mlw_mask;
break;
default:
return -ENODEV;
}
return 0;
}
static int amdgpu_cgs_get_active_displays_info(void *cgs_device,
struct cgs_display_info *info)
{
CGS_FUNC_ADEV;
struct amdgpu_crtc *amdgpu_crtc;
struct drm_device *ddev = adev->ddev;
struct drm_crtc *crtc;
uint32_t line_time_us, vblank_lines;
if (info == NULL)
return -EINVAL;
if (adev->mode_info.num_crtc && adev->mode_info.mode_config_initialized) {
list_for_each_entry(crtc,
&ddev->mode_config.crtc_list, head) {
amdgpu_crtc = to_amdgpu_crtc(crtc);
if (crtc->enabled) {
info->active_display_mask |= (1 << amdgpu_crtc->crtc_id);
info->display_count++;
}
if (info->mode_info != NULL &&
crtc->enabled && amdgpu_crtc->enabled &&
amdgpu_crtc->hw_mode.clock) {
line_time_us = (amdgpu_crtc->hw_mode.crtc_htotal * 1000) /
amdgpu_crtc->hw_mode.clock;
vblank_lines = amdgpu_crtc->hw_mode.crtc_vblank_end -
amdgpu_crtc->hw_mode.crtc_vdisplay +
(amdgpu_crtc->v_border * 2);
info->mode_info->vblank_time_us = vblank_lines * line_time_us;
info->mode_info->refresh_rate = drm_mode_vrefresh(&amdgpu_crtc->hw_mode);
info->mode_info->ref_clock = adev->clock.spll.reference_freq;
info->mode_info++;
}
}
}
return 0;
}
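The display-info hunk above derives the vblank interval from the mode timings: with the pixel clock in kHz, `htotal * 1000 / clock` yields the per-line time in microseconds, and the vblank line count is `crtc_vblank_end - crtc_vdisplay + 2 * v_border`. A standalone sketch of that arithmetic (the helper name and the mode values used to exercise it are illustrative, not part of the diff):

```c
#include <assert.h>
#include <stdint.h>

/* Same integer arithmetic as amdgpu_cgs_get_active_displays_info():
 * pixel clock is in kHz, so htotal * 1000 / clock_khz is microseconds
 * per scanline; multiply by the number of blanking lines. */
static uint32_t vblank_time_us(uint32_t htotal, uint32_t clock_khz,
			       uint32_t vblank_end, uint32_t vdisplay,
			       uint32_t v_border)
{
	uint32_t line_time_us = (htotal * 1000) / clock_khz;
	uint32_t vblank_lines = vblank_end - vdisplay + 2 * v_border;

	return vblank_lines * line_time_us;
}
```

For a 1080p60 CEA mode (htotal 2200, clock 148500 kHz, 45 blanking lines) this gives 45 * 14 = 630 µs, the kind of figure powerplay uses to size memory-clock switch windows.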
/** \brief evaluate acpi namespace object, handle or pathname must be valid
* \param cgs_device
* \param info input/output arguments for the control method
* \return status
*/
#if defined(CONFIG_ACPI)
static int amdgpu_cgs_acpi_eval_object(void *cgs_device,
struct cgs_acpi_method_info *info)
{
CGS_FUNC_ADEV;
acpi_handle handle;
struct acpi_object_list input;
struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
union acpi_object *params = NULL;
union acpi_object *obj = NULL;
uint8_t name[5] = {'\0'};
struct cgs_acpi_method_argument *argument = NULL;
uint32_t i, count;
acpi_status status;
int result;
uint32_t func_no = 0xFFFFFFFF;
handle = ACPI_HANDLE(&adev->pdev->dev);
if (!handle)
return -ENODEV;
memset(&input, 0, sizeof(struct acpi_object_list));
/* validate input info */
if (info->size != sizeof(struct cgs_acpi_method_info))
return -EINVAL;
input.count = info->input_count;
if (info->input_count > 0) {
if (info->pinput_argument == NULL)
return -EINVAL;
argument = info->pinput_argument;
func_no = argument->value;
for (i = 0; i < info->input_count; i++) {
if (((argument->type == ACPI_TYPE_STRING) ||
(argument->type == ACPI_TYPE_BUFFER)) &&
(argument->pointer == NULL))
return -EINVAL;
argument++;
}
}
if (info->output_count > 0) {
if (info->poutput_argument == NULL)
return -EINVAL;
argument = info->poutput_argument;
for (i = 0; i < info->output_count; i++) {
if (((argument->type == ACPI_TYPE_STRING) ||
(argument->type == ACPI_TYPE_BUFFER))
&& (argument->pointer == NULL))
return -EINVAL;
argument++;
}
}
/* The path name passed to acpi_evaluate_object should be null terminated */
if ((info->field & CGS_ACPI_FIELD_METHOD_NAME) != 0) {
strncpy(name, (char *)&(info->name), sizeof(uint32_t));
name[4] = '\0';
}
/* parse input parameters */
if (input.count > 0) {
input.pointer = params =
kzalloc(sizeof(union acpi_object) * input.count, GFP_KERNEL);
if (params == NULL)
return -EINVAL;
argument = info->pinput_argument;
for (i = 0; i < input.count; i++) {
params->type = argument->type;
switch (params->type) {
case ACPI_TYPE_INTEGER:
params->integer.value = argument->value;
break;
case ACPI_TYPE_STRING:
params->string.length = argument->method_length;
params->string.pointer = argument->pointer;
break;
case ACPI_TYPE_BUFFER:
params->buffer.length = argument->method_length;
params->buffer.pointer = argument->pointer;
break;
default:
break;
}
params++;
argument++;
}
}
/* parse output info */
count = info->output_count;
argument = info->poutput_argument;
/* evaluate the acpi method */
status = acpi_evaluate_object(handle, name, &input, &output);
if (ACPI_FAILURE(status)) {
result = -EIO;
goto error;
}
/* return the output info */
obj = output.pointer;
if (count > 1) {
if ((obj->type != ACPI_TYPE_PACKAGE) ||
(obj->package.count != count)) {
result = -EIO;
goto error;
}
params = obj->package.elements;
} else
params = obj;
if (params == NULL) {
result = -EIO;
goto error;
}
for (i = 0; i < count; i++) {
if (argument->type != params->type) {
result = -EIO;
goto error;
}
switch (params->type) {
case ACPI_TYPE_INTEGER:
argument->value = params->integer.value;
break;
case ACPI_TYPE_STRING:
if ((params->string.length != argument->data_length) ||
(params->string.pointer == NULL)) {
result = -EIO;
goto error;
}
strncpy(argument->pointer,
params->string.pointer,
params->string.length);
break;
case ACPI_TYPE_BUFFER:
if (params->buffer.pointer == NULL) {
result = -EIO;
goto error;
}
memcpy(argument->pointer,
params->buffer.pointer,
argument->data_length);
break;
default:
break;
}
argument++;
params++;
}
error:
if (obj != NULL)
kfree(obj);
kfree((void *)input.pointer);
return result;
}
#else
static int amdgpu_cgs_acpi_eval_object(void *cgs_device,
struct cgs_acpi_method_info *info)
{
return -EIO;
}
#endif
int amdgpu_cgs_call_acpi_method(void *cgs_device,
uint32_t acpi_method,
uint32_t acpi_function,
void *pinput, void *poutput,
uint32_t output_count,
uint32_t input_size,
uint32_t output_size)
{
struct cgs_acpi_method_argument acpi_input[2] = { {0}, {0} };
struct cgs_acpi_method_argument acpi_output = {0};
struct cgs_acpi_method_info info = {0};
acpi_input[0].type = CGS_ACPI_TYPE_INTEGER;
acpi_input[0].method_length = sizeof(uint32_t);
acpi_input[0].data_length = sizeof(uint32_t);
acpi_input[0].value = acpi_function;
acpi_input[1].type = CGS_ACPI_TYPE_BUFFER;
acpi_input[1].method_length = CGS_ACPI_MAX_BUFFER_SIZE;
acpi_input[1].data_length = input_size;
acpi_input[1].pointer = pinput;
acpi_output.type = CGS_ACPI_TYPE_BUFFER;
acpi_output.method_length = CGS_ACPI_MAX_BUFFER_SIZE;
acpi_output.data_length = output_size;
acpi_output.pointer = poutput;
info.size = sizeof(struct cgs_acpi_method_info);
info.field = CGS_ACPI_FIELD_METHOD_NAME | CGS_ACPI_FIELD_INPUT_ARGUMENT_COUNT;
info.input_count = 2;
info.name = acpi_method;
info.pinput_argument = acpi_input;
info.output_count = output_count;
info.poutput_argument = &acpi_output;
return amdgpu_cgs_acpi_eval_object(cgs_device, &info);
}
static const struct cgs_ops amdgpu_cgs_ops = {
amdgpu_cgs_gpu_mem_info,
amdgpu_cgs_gmap_kmem,
@@ -756,6 +1076,7 @@ static const struct cgs_ops amdgpu_cgs_ops = {
amdgpu_cgs_write_pci_config_byte,
amdgpu_cgs_write_pci_config_word,
amdgpu_cgs_write_pci_config_dword,
amdgpu_cgs_get_pci_resource,
amdgpu_cgs_atom_get_data_table,
amdgpu_cgs_atom_get_cmd_table_revs,
amdgpu_cgs_atom_exec_cmd_table,
@@ -768,7 +1089,10 @@ static const struct cgs_ops amdgpu_cgs_ops = {
amdgpu_cgs_set_camera_voltages,
amdgpu_cgs_get_firmware_info,
amdgpu_cgs_set_powergating_state,
amdgpu_cgs_set_clockgating_state
amdgpu_cgs_set_clockgating_state,
amdgpu_cgs_get_active_displays_info,
amdgpu_cgs_call_acpi_method,
amdgpu_cgs_query_system_info,
};
static const struct cgs_os_ops amdgpu_cgs_os_ops = {


@@ -406,8 +406,8 @@ static int amdgpu_cs_parser_relocs(struct amdgpu_cs_parser *p)
amdgpu_cs_buckets_get_list(&buckets, &p->validated);
}
p->vm_bos = amdgpu_vm_get_bos(p->adev, &fpriv->vm,
&p->validated);
INIT_LIST_HEAD(&duplicates);
amdgpu_vm_get_pd_bo(&fpriv->vm, &p->validated, &p->vm_pd);
if (p->uf.bo)
list_add(&p->uf_entry.tv.head, &p->validated);
@@ -415,20 +415,23 @@ static int amdgpu_cs_parser_relocs(struct amdgpu_cs_parser *p)
if (need_mmap_lock)
down_read(&current->mm->mmap_sem);
INIT_LIST_HEAD(&duplicates);
r = ttm_eu_reserve_buffers(&p->ticket, &p->validated, true, &duplicates);
if (unlikely(r != 0))
goto error_reserve;
r = amdgpu_cs_list_validate(p->adev, &fpriv->vm, &p->validated);
amdgpu_vm_get_pt_bos(&fpriv->vm, &duplicates);
r = amdgpu_cs_list_validate(p->adev, &fpriv->vm, &duplicates);
if (r)
goto error_validate;
r = amdgpu_cs_list_validate(p->adev, &fpriv->vm, &duplicates);
r = amdgpu_cs_list_validate(p->adev, &fpriv->vm, &p->validated);
error_validate:
if (r)
if (r) {
amdgpu_vm_move_pt_bos_in_lru(p->adev, &fpriv->vm);
ttm_eu_backoff_reservation(&p->ticket, &p->validated);
}
error_reserve:
if (need_mmap_lock)
@@ -472,8 +475,11 @@ static int cmp_size_smaller_first(void *priv, struct list_head *a,
**/
static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error, bool backoff)
{
struct amdgpu_fpriv *fpriv = parser->filp->driver_priv;
unsigned i;
amdgpu_vm_move_pt_bos_in_lru(parser->adev, &fpriv->vm);
if (!error) {
/* Sort the buffer list from the smallest to largest buffer,
* which affects the order of buffers in the LRU list.
@@ -501,7 +507,6 @@ static void amdgpu_cs_parser_fini(struct amdgpu_cs_parser *parser, int error, bo
if (parser->bo_list)
amdgpu_bo_list_put(parser->bo_list);
drm_free_large(parser->vm_bos);
for (i = 0; i < parser->nchunks; i++)
drm_free_large(parser->chunks[i].kdata);
kfree(parser->chunks);


@@ -25,7 +25,7 @@
#include <drm/drmP.h>
#include "amdgpu.h"
int amdgpu_ctx_init(struct amdgpu_device *adev, bool kernel,
int amdgpu_ctx_init(struct amdgpu_device *adev, enum amd_sched_priority pri,
struct amdgpu_ctx *ctx)
{
unsigned i, j;
@@ -35,17 +35,25 @@ int amdgpu_ctx_init(struct amdgpu_device *adev, bool kernel,
ctx->adev = adev;
kref_init(&ctx->refcount);
spin_lock_init(&ctx->ring_lock);
for (i = 0; i < AMDGPU_MAX_RINGS; ++i)
ctx->rings[i].sequence = 1;
ctx->fences = kzalloc(sizeof(struct fence *) * amdgpu_sched_jobs *
AMDGPU_MAX_RINGS, GFP_KERNEL);
if (!ctx->fences)
return -ENOMEM;
for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
ctx->rings[i].sequence = 1;
ctx->rings[i].fences = (void *)ctx->fences + sizeof(struct fence *) *
amdgpu_sched_jobs * i;
}
if (amdgpu_enable_scheduler) {
/* create context entity for each ring */
for (i = 0; i < adev->num_rings; i++) {
struct amd_sched_rq *rq;
if (kernel)
rq = &adev->rings[i]->sched.kernel_rq;
else
rq = &adev->rings[i]->sched.sched_rq;
if (pri >= AMD_SCHED_MAX_PRIORITY) {
kfree(ctx->fences);
return -EINVAL;
}
rq = &adev->rings[i]->sched.sched_rq[pri];
r = amd_sched_entity_init(&adev->rings[i]->sched,
&ctx->rings[i].entity,
rq, amdgpu_sched_jobs);
@@ -57,7 +65,7 @@ int amdgpu_ctx_init(struct amdgpu_device *adev, bool kernel,
for (j = 0; j < i; j++)
amd_sched_entity_fini(&adev->rings[j]->sched,
&ctx->rings[j].entity);
kfree(ctx);
kfree(ctx->fences);
return r;
}
}
@@ -73,8 +81,9 @@ void amdgpu_ctx_fini(struct amdgpu_ctx *ctx)
return;
for (i = 0; i < AMDGPU_MAX_RINGS; ++i)
for (j = 0; j < AMDGPU_CTX_MAX_CS_PENDING; ++j)
for (j = 0; j < amdgpu_sched_jobs; ++j)
fence_put(ctx->rings[i].fences[j]);
kfree(ctx->fences);
if (amdgpu_enable_scheduler) {
for (i = 0; i < adev->num_rings; i++)
@@ -103,9 +112,13 @@ static int amdgpu_ctx_alloc(struct amdgpu_device *adev,
return r;
}
*id = (uint32_t)r;
r = amdgpu_ctx_init(adev, false, ctx);
r = amdgpu_ctx_init(adev, AMD_SCHED_PRIORITY_NORMAL, ctx);
if (r) {
idr_remove(&mgr->ctx_handles, *id);
*id = 0;
kfree(ctx);
}
mutex_unlock(&mgr->lock);
return r;
}
@@ -239,7 +252,7 @@ uint64_t amdgpu_ctx_add_fence(struct amdgpu_ctx *ctx, struct amdgpu_ring *ring,
unsigned idx = 0;
struct fence *other = NULL;
idx = seq % AMDGPU_CTX_MAX_CS_PENDING;
idx = seq & (amdgpu_sched_jobs - 1);
other = cring->fences[idx];
if (other) {
signed long r;
@@ -274,12 +287,12 @@ struct fence *amdgpu_ctx_get_fence(struct amdgpu_ctx *ctx,
}
if (seq + AMDGPU_CTX_MAX_CS_PENDING < cring->sequence) {
if (seq + amdgpu_sched_jobs < cring->sequence) {
spin_unlock(&ctx->ring_lock);
return NULL;
}
fence = fence_get(cring->fences[seq % AMDGPU_CTX_MAX_CS_PENDING]);
fence = fence_get(cring->fences[seq & (amdgpu_sched_jobs - 1)]);
spin_unlock(&ctx->ring_lock);
return fence;
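The fence-ring hunks above swap `seq % AMDGPU_CTX_MAX_CS_PENDING` for `seq & (amdgpu_sched_jobs - 1)`, which is only equivalent because amdgpu_check_arguments() forces amdgpu_sched_jobs to a power of two. A standalone sketch of the identity:

```c
#include <assert.h>
#include <stdint.h>

/* For power-of-two ring sizes, masking with (size - 1) equals a modulo
 * but avoids an integer division on the submission path. */
static uint64_t ring_index(uint64_t seq, uint32_t ring_size)
{
	/* ring_size must be a power of two for this identity to hold. */
	return seq & (uint64_t)(ring_size - 1);
}
```

This is the standard ring-buffer indexing trick; the diff can rely on it precisely because the module-parameter sanitising elsewhere in this series rounds `sched_jobs` up with roundup_pow_of_two().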


@@ -38,6 +38,7 @@
#include "amdgpu_i2c.h"
#include "atom.h"
#include "amdgpu_atombios.h"
#include "amd_pcie.h"
#ifdef CONFIG_DRM_AMDGPU_CIK
#include "cik.h"
#endif
@@ -949,6 +950,15 @@ static bool amdgpu_check_pot_argument(int arg)
*/
static void amdgpu_check_arguments(struct amdgpu_device *adev)
{
if (amdgpu_sched_jobs < 4) {
dev_warn(adev->dev, "sched jobs (%d) must be at least 4\n",
amdgpu_sched_jobs);
amdgpu_sched_jobs = 4;
} else if (!amdgpu_check_pot_argument(amdgpu_sched_jobs)){
dev_warn(adev->dev, "sched jobs (%d) must be a power of 2\n",
amdgpu_sched_jobs);
amdgpu_sched_jobs = roundup_pow_of_two(amdgpu_sched_jobs);
}
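The parameter check above clamps to a minimum of 4 and then rounds up to a power of two. A userspace sketch of the same sanitising (helper names are illustrative; the kernel itself uses `roundup_pow_of_two()` from log2.h):

```c
#include <assert.h>
#include <stdint.h>

static int is_pow2(uint32_t v)
{
	return v != 0 && (v & (v - 1)) == 0;
}

/* Mirrors amdgpu_check_arguments(): at least 4 jobs, rounded up to the
 * next power of two so the fence-ring index can use a mask. */
static uint32_t sanitize_sched_jobs(uint32_t jobs)
{
	uint32_t p = 4;

	if (jobs < 4)
		return 4;
	if (is_pow2(jobs))
		return jobs;
	while (p < jobs)
		p <<= 1;
	return p;
}
```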
/* vramlimit must be a power of two */
if (!amdgpu_check_pot_argument(amdgpu_vram_limit)) {
dev_warn(adev->dev, "vram limit (%d) must be a power of 2\n",
@@ -1214,12 +1224,14 @@ static int amdgpu_early_init(struct amdgpu_device *adev)
} else {
if (adev->ip_blocks[i].funcs->early_init) {
r = adev->ip_blocks[i].funcs->early_init((void *)adev);
if (r == -ENOENT)
if (r == -ENOENT) {
adev->ip_block_status[i].valid = false;
else if (r)
} else if (r) {
DRM_ERROR("early_init %d failed %d\n", i, r);
return r;
else
} else {
adev->ip_block_status[i].valid = true;
}
} else {
adev->ip_block_status[i].valid = true;
}
@@ -1237,20 +1249,28 @@ static int amdgpu_init(struct amdgpu_device *adev)
if (!adev->ip_block_status[i].valid)
continue;
r = adev->ip_blocks[i].funcs->sw_init((void *)adev);
if (r)
if (r) {
DRM_ERROR("sw_init %d failed %d\n", i, r);
return r;
}
adev->ip_block_status[i].sw = true;
/* need to do gmc hw init early so we can allocate gpu mem */
if (adev->ip_blocks[i].type == AMD_IP_BLOCK_TYPE_GMC) {
r = amdgpu_vram_scratch_init(adev);
if (r)
if (r) {
DRM_ERROR("amdgpu_vram_scratch_init failed %d\n", r);
return r;
}
r = adev->ip_blocks[i].funcs->hw_init((void *)adev);
if (r)
if (r) {
DRM_ERROR("hw_init %d failed %d\n", i, r);
return r;
}
r = amdgpu_wb_init(adev);
if (r)
if (r) {
DRM_ERROR("amdgpu_wb_init failed %d\n", r);
return r;
}
adev->ip_block_status[i].hw = true;
}
}
@@ -1262,8 +1282,10 @@ static int amdgpu_init(struct amdgpu_device *adev)
if (adev->ip_blocks[i].type == AMD_IP_BLOCK_TYPE_GMC)
continue;
r = adev->ip_blocks[i].funcs->hw_init((void *)adev);
if (r)
if (r) {
DRM_ERROR("hw_init %d failed %d\n", i, r);
return r;
}
adev->ip_block_status[i].hw = true;
}
@@ -1280,12 +1302,16 @@ static int amdgpu_late_init(struct amdgpu_device *adev)
/* enable clockgating to save power */
r = adev->ip_blocks[i].funcs->set_clockgating_state((void *)adev,
AMD_CG_STATE_GATE);
if (r)
if (r) {
DRM_ERROR("set_clockgating_state(gate) %d failed %d\n", i, r);
return r;
}
if (adev->ip_blocks[i].funcs->late_init) {
r = adev->ip_blocks[i].funcs->late_init((void *)adev);
if (r)
if (r) {
DRM_ERROR("late_init %d failed %d\n", i, r);
return r;
}
}
}
@@ -1306,10 +1332,15 @@ static int amdgpu_fini(struct amdgpu_device *adev)
/* ungate blocks before hw fini so that we can shutdown the blocks safely */
r = adev->ip_blocks[i].funcs->set_clockgating_state((void *)adev,
AMD_CG_STATE_UNGATE);
if (r)
if (r) {
DRM_ERROR("set_clockgating_state(ungate) %d failed %d\n", i, r);
return r;
}
r = adev->ip_blocks[i].funcs->hw_fini((void *)adev);
/* XXX handle errors */
if (r) {
DRM_DEBUG("hw_fini %d failed %d\n", i, r);
}
adev->ip_block_status[i].hw = false;
}
@@ -1318,6 +1349,9 @@ static int amdgpu_fini(struct amdgpu_device *adev)
continue;
r = adev->ip_blocks[i].funcs->sw_fini((void *)adev);
/* XXX handle errors */
if (r) {
DRM_DEBUG("sw_fini %d failed %d\n", i, r);
}
adev->ip_block_status[i].sw = false;
adev->ip_block_status[i].valid = false;
}
@@ -1335,9 +1369,15 @@ static int amdgpu_suspend(struct amdgpu_device *adev)
/* ungate blocks so that suspend can properly shut them down */
r = adev->ip_blocks[i].funcs->set_clockgating_state((void *)adev,
AMD_CG_STATE_UNGATE);
if (r) {
DRM_ERROR("set_clockgating_state(ungate) %d failed %d\n", i, r);
}
/* XXX handle errors */
r = adev->ip_blocks[i].funcs->suspend(adev);
/* XXX handle errors */
if (r) {
DRM_ERROR("suspend %d failed %d\n", i, r);
}
}
return 0;
@@ -1351,8 +1391,10 @@ static int amdgpu_resume(struct amdgpu_device *adev)
if (!adev->ip_block_status[i].valid)
continue;
r = adev->ip_blocks[i].funcs->resume(adev);
if (r)
if (r) {
DRM_ERROR("resume %d failed %d\n", i, r);
return r;
}
}
return 0;
@@ -1484,8 +1526,10 @@ int amdgpu_device_init(struct amdgpu_device *adev,
return -EINVAL;
}
r = amdgpu_atombios_init(adev);
if (r)
if (r) {
dev_err(adev->dev, "amdgpu_atombios_init failed\n");
return r;
}
/* Post card if necessary */
if (!amdgpu_card_posted(adev)) {
@@ -1499,21 +1543,26 @@ int amdgpu_device_init(struct amdgpu_device *adev,
/* Initialize clocks */
r = amdgpu_atombios_get_clock_info(adev);
if (r)
if (r) {
dev_err(adev->dev, "amdgpu_atombios_get_clock_info failed\n");
return r;
}
/* init i2c buses */
amdgpu_atombios_i2c_init(adev);
/* Fence driver */
r = amdgpu_fence_driver_init(adev);
if (r)
if (r) {
dev_err(adev->dev, "amdgpu_fence_driver_init failed\n");
return r;
}
/* init the mode config */
drm_mode_config_init(adev->ddev);
r = amdgpu_init(adev);
if (r) {
dev_err(adev->dev, "amdgpu_init failed\n");
amdgpu_fini(adev);
return r;
}
@@ -1528,7 +1577,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
return r;
}
r = amdgpu_ctx_init(adev, true, &adev->kernel_ctx);
r = amdgpu_ctx_init(adev, AMD_SCHED_PRIORITY_KERNEL, &adev->kernel_ctx);
if (r) {
dev_err(adev->dev, "failed to create kernel context (%d).\n", r);
return r;
@@ -1570,8 +1619,10 @@ int amdgpu_device_init(struct amdgpu_device *adev,
* explicit gating rather than handling it automatically.
*/
r = amdgpu_late_init(adev);
if (r)
if (r) {
dev_err(adev->dev, "amdgpu_late_init failed\n");
return r;
}
return 0;
}
@@ -1788,6 +1839,7 @@ int amdgpu_resume_kms(struct drm_device *dev, bool resume, bool fbcon)
}
drm_kms_helper_poll_enable(dev);
drm_helper_hpd_irq_event(dev);
if (fbcon) {
amdgpu_fbdev_set_suspend(adev, 0);
@@ -1881,6 +1933,83 @@ retry:
return r;
}
void amdgpu_get_pcie_info(struct amdgpu_device *adev)
{
u32 mask;
int ret;
if (pci_is_root_bus(adev->pdev->bus))
return;
if (amdgpu_pcie_gen2 == 0)
return;
if (adev->flags & AMD_IS_APU)
return;
ret = drm_pcie_get_speed_cap_mask(adev->ddev, &mask);
if (!ret) {
adev->pm.pcie_gen_mask = (CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN1 |
CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN2 |
CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN3);
if (mask & DRM_PCIE_SPEED_25)
adev->pm.pcie_gen_mask |= CAIL_PCIE_LINK_SPEED_SUPPORT_GEN1;
if (mask & DRM_PCIE_SPEED_50)
adev->pm.pcie_gen_mask |= CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2;
if (mask & DRM_PCIE_SPEED_80)
adev->pm.pcie_gen_mask |= CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3;
}
ret = drm_pcie_get_max_link_width(adev->ddev, &mask);
if (!ret) {
switch (mask) {
case 32:
adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X32 |
CAIL_PCIE_LINK_WIDTH_SUPPORT_X16 |
CAIL_PCIE_LINK_WIDTH_SUPPORT_X12 |
CAIL_PCIE_LINK_WIDTH_SUPPORT_X8 |
CAIL_PCIE_LINK_WIDTH_SUPPORT_X4 |
CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
break;
case 16:
adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X16 |
CAIL_PCIE_LINK_WIDTH_SUPPORT_X12 |
CAIL_PCIE_LINK_WIDTH_SUPPORT_X8 |
CAIL_PCIE_LINK_WIDTH_SUPPORT_X4 |
CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
break;
case 12:
adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X12 |
CAIL_PCIE_LINK_WIDTH_SUPPORT_X8 |
CAIL_PCIE_LINK_WIDTH_SUPPORT_X4 |
CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
break;
case 8:
adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X8 |
CAIL_PCIE_LINK_WIDTH_SUPPORT_X4 |
CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
break;
case 4:
adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X4 |
CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
break;
case 2:
adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
break;
case 1:
adev->pm.pcie_mlw_mask = CAIL_PCIE_LINK_WIDTH_SUPPORT_X1;
break;
default:
break;
}
}
}
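The cascade of cases in amdgpu_get_pcie_info() above ORs in a flag for every supported link width up to the detected maximum. With one illustrative bit per width (the actual CAIL_PCIE_LINK_WIDTH_SUPPORT_* values are not reproduced here), the same table collapses to a loop:

```c
#include <assert.h>
#include <stdint.h>

/* Build a bitmask of every PCIe link width up to max_lanes, one bit
 * per entry in the widths table. Bit assignments are illustrative;
 * the kernel uses the CAIL_PCIE_LINK_WIDTH_SUPPORT_* constants. */
static uint32_t pcie_width_mask(unsigned max_lanes)
{
	static const unsigned widths[] = { 1, 2, 4, 8, 12, 16, 32 };
	uint32_t mask = 0;
	unsigned i;

	for (i = 0; i < sizeof(widths) / sizeof(widths[0]); i++)
		if (widths[i] <= max_lanes)
			mask |= 1u << i;
	return mask;
}
```

The explicit switch in the diff trades this compactness for having each mask spelled out, which matches how the radeon driver already does it.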
/*
* Debugfs


@@ -79,9 +79,10 @@ int amdgpu_vm_fault_stop = 0;
int amdgpu_vm_debug = 0;
int amdgpu_exp_hw_support = 0;
int amdgpu_enable_scheduler = 1;
int amdgpu_sched_jobs = 16;
int amdgpu_sched_jobs = 32;
int amdgpu_sched_hw_submission = 2;
int amdgpu_enable_semaphores = 0;
int amdgpu_powerplay = -1;
MODULE_PARM_DESC(vramlimit, "Restrict VRAM for testing, in megabytes");
module_param_named(vramlimit, amdgpu_vram_limit, int, 0600);
@@ -155,7 +156,7 @@ module_param_named(exp_hw_support, amdgpu_exp_hw_support, int, 0444);
MODULE_PARM_DESC(enable_scheduler, "enable SW GPU scheduler (1 = enable (default), 0 = disable)");
module_param_named(enable_scheduler, amdgpu_enable_scheduler, int, 0444);
MODULE_PARM_DESC(sched_jobs, "the max number of jobs supported in the sw queue (default 16)");
MODULE_PARM_DESC(sched_jobs, "the max number of jobs supported in the sw queue (default 32)");
module_param_named(sched_jobs, amdgpu_sched_jobs, int, 0444);
MODULE_PARM_DESC(sched_hw_submission, "the max number of HW submissions (default 2)");
@@ -164,6 +165,11 @@ module_param_named(sched_hw_submission, amdgpu_sched_hw_submission, int, 0444);
MODULE_PARM_DESC(enable_semaphores, "Enable semaphores (1 = enable, 0 = disable (default))");
module_param_named(enable_semaphores, amdgpu_enable_semaphores, int, 0644);
#ifdef CONFIG_DRM_AMD_POWERPLAY
MODULE_PARM_DESC(powerplay, "Powerplay component (1 = enable, 0 = disable, -1 = auto (default))");
module_param_named(powerplay, amdgpu_powerplay, int, 0444);
#endif
static struct pci_device_id pciidlist[] = {
#ifdef CONFIG_DRM_AMDGPU_CIK
/* Kaveri */


@@ -263,7 +263,7 @@ out_unref:
}
if (fb && ret) {
drm_gem_object_unreference(gobj);
drm_gem_object_unreference_unlocked(gobj);
drm_framebuffer_unregister_private(fb);
drm_framebuffer_cleanup(fb);
kfree(fb);


@@ -448,7 +448,7 @@ static void amdgpu_gem_va_update_vm(struct amdgpu_device *adev,
struct amdgpu_bo_va *bo_va, uint32_t operation)
{
struct ttm_validate_buffer tv, *entry;
struct amdgpu_bo_list_entry *vm_bos;
struct amdgpu_bo_list_entry vm_pd;
struct ww_acquire_ctx ticket;
struct list_head list, duplicates;
unsigned domain;
@@ -461,15 +461,14 @@ static void amdgpu_gem_va_update_vm(struct amdgpu_device *adev,
tv.shared = true;
list_add(&tv.head, &list);
vm_bos = amdgpu_vm_get_bos(adev, bo_va->vm, &list);
if (!vm_bos)
return;
amdgpu_vm_get_pd_bo(bo_va->vm, &list, &vm_pd);
/* Provide duplicates to avoid -EALREADY */
r = ttm_eu_reserve_buffers(&ticket, &list, true, &duplicates);
if (r)
goto error_free;
goto error_print;
amdgpu_vm_get_pt_bos(bo_va->vm, &duplicates);
list_for_each_entry(entry, &list, head) {
domain = amdgpu_mem_type_to_domain(entry->bo->mem.mem_type);
/* if anything is swapped out don't swap it in here,
@@ -499,9 +498,7 @@ static void amdgpu_gem_va_update_vm(struct amdgpu_device *adev,
error_unreserve:
ttm_eu_backoff_reservation(&ticket, &list);
error_free:
drm_free_large(vm_bos);
error_print:
if (r && r != -ERESTARTSYS)
DRM_ERROR("Couldn't update BO_VA (%d)\n", r);
}


@@ -25,6 +25,7 @@
* Alex Deucher
* Jerome Glisse
*/
#include <linux/irq.h>
#include <drm/drmP.h>
#include <drm/drm_crtc_helper.h>
#include <drm/amdgpu_drm.h>
@@ -312,6 +313,7 @@ int amdgpu_irq_add_id(struct amdgpu_device *adev, unsigned src_id,
}
adev->irq.sources[src_id] = source;
return 0;
}
@@ -335,15 +337,19 @@ void amdgpu_irq_dispatch(struct amdgpu_device *adev,
return;
}
src = adev->irq.sources[src_id];
if (!src) {
DRM_DEBUG("Unhandled interrupt src_id: %d\n", src_id);
return;
}
if (adev->irq.virq[src_id]) {
generic_handle_irq(irq_find_mapping(adev->irq.domain, src_id));
} else {
src = adev->irq.sources[src_id];
if (!src) {
DRM_DEBUG("Unhandled interrupt src_id: %d\n", src_id);
return;
}
r = src->funcs->process(adev, src, entry);
if (r)
DRM_ERROR("error processing interrupt (%d)\n", r);
r = src->funcs->process(adev, src, entry);
if (r)
DRM_ERROR("error processing interrupt (%d)\n", r);
}
}
/**
@@ -461,3 +467,90 @@ bool amdgpu_irq_enabled(struct amdgpu_device *adev, struct amdgpu_irq_src *src,
return !!atomic_read(&src->enabled_types[type]);
}
/* gen irq */
static void amdgpu_irq_mask(struct irq_data *irqd)
{
/* XXX */
}
static void amdgpu_irq_unmask(struct irq_data *irqd)
{
/* XXX */
}
static struct irq_chip amdgpu_irq_chip = {
.name = "amdgpu-ih",
.irq_mask = amdgpu_irq_mask,
.irq_unmask = amdgpu_irq_unmask,
};
static int amdgpu_irqdomain_map(struct irq_domain *d,
unsigned int irq, irq_hw_number_t hwirq)
{
if (hwirq >= AMDGPU_MAX_IRQ_SRC_ID)
return -EPERM;
irq_set_chip_and_handler(irq,
&amdgpu_irq_chip, handle_simple_irq);
return 0;
}
static struct irq_domain_ops amdgpu_hw_irqdomain_ops = {
.map = amdgpu_irqdomain_map,
};
/**
* amdgpu_irq_add_domain - create a linear irq domain
*
* @adev: amdgpu device pointer
*
* Create an irq domain for GPU interrupt sources
* that may be driven by another driver (e.g., ACP).
*/
int amdgpu_irq_add_domain(struct amdgpu_device *adev)
{
adev->irq.domain = irq_domain_add_linear(NULL, AMDGPU_MAX_IRQ_SRC_ID,
&amdgpu_hw_irqdomain_ops, adev);
if (!adev->irq.domain) {
DRM_ERROR("GPU irq add domain failed\n");
return -ENODEV;
}
return 0;
}
/**
* amdgpu_irq_remove_domain - remove the irq domain
*
* @adev: amdgpu device pointer
*
* Remove the irq domain for GPU interrupt sources
* that may be driven by another driver (e.g., ACP).
*/
void amdgpu_irq_remove_domain(struct amdgpu_device *adev)
{
if (adev->irq.domain) {
irq_domain_remove(adev->irq.domain);
adev->irq.domain = NULL;
}
}
/**
* amdgpu_irq_create_mapping - create a mapping between a domain irq and a
* Linux irq
*
* @adev: amdgpu device pointer
* @src_id: IH source id
*
* Create a mapping between a domain irq (GPU IH src id) and a Linux irq
* Use this for components that generate a GPU interrupt, but are driven
* by a different driver (e.g., ACP).
* Returns the Linux irq.
*/
unsigned amdgpu_irq_create_mapping(struct amdgpu_device *adev, unsigned src_id)
{
adev->irq.virq[src_id] = irq_create_mapping(adev->irq.domain, src_id);
return adev->irq.virq[src_id];
}


@@ -24,6 +24,7 @@
#ifndef __AMDGPU_IRQ_H__
#define __AMDGPU_IRQ_H__
#include <linux/irqdomain.h>
#include "amdgpu_ih.h"
#define AMDGPU_MAX_IRQ_SRC_ID 0x100
@@ -65,6 +66,10 @@ struct amdgpu_irq {
/* interrupt ring */
struct amdgpu_ih_ring ih;
const struct amdgpu_ih_funcs *ih_funcs;
/* gen irq stuff */
struct irq_domain *domain; /* GPU irq controller domain */
unsigned virq[AMDGPU_MAX_IRQ_SRC_ID];
};
void amdgpu_irq_preinstall(struct drm_device *dev);
@@ -90,4 +95,8 @@ int amdgpu_irq_put(struct amdgpu_device *adev, struct amdgpu_irq_src *src,
bool amdgpu_irq_enabled(struct amdgpu_device *adev, struct amdgpu_irq_src *src,
unsigned type);
int amdgpu_irq_add_domain(struct amdgpu_device *adev);
void amdgpu_irq_remove_domain(struct amdgpu_device *adev);
unsigned amdgpu_irq_create_mapping(struct amdgpu_device *adev, unsigned src_id);
#endif


@@ -35,6 +35,7 @@
#include <drm/drm_dp_helper.h>
#include <drm/drm_fixed.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_plane_helper.h>
#include <linux/i2c.h>
#include <linux/i2c-algo-bit.h>


@@ -96,6 +96,7 @@ static inline void amdgpu_bo_unreserve(struct amdgpu_bo *bo)
*/
static inline u64 amdgpu_bo_gpu_offset(struct amdgpu_bo *bo)
{
WARN_ON_ONCE(bo->tbo.mem.mem_type == TTM_PL_SYSTEM);
return bo->tbo.offset;
}


@@ -30,10 +30,16 @@
#include <linux/hwmon.h>
#include <linux/hwmon-sysfs.h>
#include "amd_powerplay.h"
static int amdgpu_debugfs_pm_init(struct amdgpu_device *adev);
void amdgpu_pm_acpi_event_handler(struct amdgpu_device *adev)
{
if (adev->pp_enabled)
/* TODO */
return;
if (adev->pm.dpm_enabled) {
mutex_lock(&adev->pm.mutex);
if (power_supply_is_system_supplied() > 0)
@@ -52,7 +58,12 @@ static ssize_t amdgpu_get_dpm_state(struct device *dev,
{
struct drm_device *ddev = dev_get_drvdata(dev);
struct amdgpu_device *adev = ddev->dev_private;
enum amdgpu_pm_state_type pm = adev->pm.dpm.user_state;
enum amd_pm_state_type pm;
if (adev->pp_enabled) {
pm = amdgpu_dpm_get_current_power_state(adev);
} else
pm = adev->pm.dpm.user_state;
return snprintf(buf, PAGE_SIZE, "%s\n",
(pm == POWER_STATE_TYPE_BATTERY) ? "battery" :
@@ -66,40 +77,57 @@ static ssize_t amdgpu_set_dpm_state(struct device *dev,
{
struct drm_device *ddev = dev_get_drvdata(dev);
struct amdgpu_device *adev = ddev->dev_private;
enum amd_pm_state_type state;
mutex_lock(&adev->pm.mutex);
if (strncmp("battery", buf, strlen("battery")) == 0)
adev->pm.dpm.user_state = POWER_STATE_TYPE_BATTERY;
state = POWER_STATE_TYPE_BATTERY;
else if (strncmp("balanced", buf, strlen("balanced")) == 0)
adev->pm.dpm.user_state = POWER_STATE_TYPE_BALANCED;
state = POWER_STATE_TYPE_BALANCED;
else if (strncmp("performance", buf, strlen("performance")) == 0)
adev->pm.dpm.user_state = POWER_STATE_TYPE_PERFORMANCE;
state = POWER_STATE_TYPE_PERFORMANCE;
else {
mutex_unlock(&adev->pm.mutex);
count = -EINVAL;
goto fail;
}
mutex_unlock(&adev->pm.mutex);
/* Can't set dpm state when the card is off */
if (!(adev->flags & AMD_IS_PX) ||
(ddev->switch_power_state == DRM_SWITCH_POWER_ON))
amdgpu_pm_compute_clocks(adev);
if (adev->pp_enabled) {
amdgpu_dpm_dispatch_task(adev, AMD_PP_EVENT_ENABLE_USER_STATE, &state, NULL);
} else {
mutex_lock(&adev->pm.mutex);
adev->pm.dpm.user_state = state;
mutex_unlock(&adev->pm.mutex);
/* Can't set dpm state when the card is off */
if (!(adev->flags & AMD_IS_PX) ||
(ddev->switch_power_state == DRM_SWITCH_POWER_ON))
amdgpu_pm_compute_clocks(adev);
}
fail:
return count;
}
static ssize_t amdgpu_get_dpm_forced_performance_level(struct device *dev,
struct device_attribute *attr,
char *buf)
struct device_attribute *attr,
char *buf)
{
struct drm_device *ddev = dev_get_drvdata(dev);
struct amdgpu_device *adev = ddev->dev_private;
enum amdgpu_dpm_forced_level level = adev->pm.dpm.forced_level;
return snprintf(buf, PAGE_SIZE, "%s\n",
(level == AMDGPU_DPM_FORCED_LEVEL_AUTO) ? "auto" :
(level == AMDGPU_DPM_FORCED_LEVEL_LOW) ? "low" : "high");
if (adev->pp_enabled) {
enum amd_dpm_forced_level level;
level = amdgpu_dpm_get_performance_level(adev);
return snprintf(buf, PAGE_SIZE, "%s\n",
(level == AMD_DPM_FORCED_LEVEL_AUTO) ? "auto" :
(level == AMD_DPM_FORCED_LEVEL_LOW) ? "low" : "high");
} else {
enum amdgpu_dpm_forced_level level;
level = adev->pm.dpm.forced_level;
return snprintf(buf, PAGE_SIZE, "%s\n",
(level == AMDGPU_DPM_FORCED_LEVEL_AUTO) ? "auto" :
(level == AMDGPU_DPM_FORCED_LEVEL_LOW) ? "low" : "high");
}
}
static ssize_t amdgpu_set_dpm_forced_performance_level(struct device *dev,
@@ -112,7 +140,6 @@ static ssize_t amdgpu_set_dpm_forced_performance_level(struct device *dev,
enum amdgpu_dpm_forced_level level;
int ret = 0;
mutex_lock(&adev->pm.mutex);
if (strncmp("low", buf, strlen("low")) == 0) {
level = AMDGPU_DPM_FORCED_LEVEL_LOW;
} else if (strncmp("high", buf, strlen("high")) == 0) {
@@ -123,7 +150,11 @@ static ssize_t amdgpu_set_dpm_forced_performance_level(struct device *dev,
count = -EINVAL;
goto fail;
}
if (adev->pm.funcs->force_performance_level) {
if (adev->pp_enabled)
amdgpu_dpm_force_performance_level(adev, level);
else {
mutex_lock(&adev->pm.mutex);
if (adev->pm.dpm.thermal_active) {
count = -EINVAL;
goto fail;
@@ -131,6 +162,9 @@ static ssize_t amdgpu_set_dpm_forced_performance_level(struct device *dev,
ret = amdgpu_dpm_force_performance_level(adev, level);
if (ret)
count = -EINVAL;
else
adev->pm.dpm.forced_level = level;
mutex_unlock(&adev->pm.mutex);
}
fail:
mutex_unlock(&adev->pm.mutex);
@@ -150,10 +184,10 @@ static ssize_t amdgpu_hwmon_show_temp(struct device *dev,
struct amdgpu_device *adev = dev_get_drvdata(dev);
int temp;
if (adev->pm.funcs->get_temperature)
temp = amdgpu_dpm_get_temperature(adev);
else
if (!adev->pp_enabled && !adev->pm.funcs->get_temperature)
temp = 0;
else
temp = amdgpu_dpm_get_temperature(adev);
return snprintf(buf, PAGE_SIZE, "%d\n", temp);
}
@@ -181,8 +215,10 @@ static ssize_t amdgpu_hwmon_get_pwm1_enable(struct device *dev,
struct amdgpu_device *adev = dev_get_drvdata(dev);
u32 pwm_mode = 0;
if (adev->pm.funcs->get_fan_control_mode)
pwm_mode = amdgpu_dpm_get_fan_control_mode(adev);
if (!adev->pp_enabled && !adev->pm.funcs->get_fan_control_mode)
return -EINVAL;
pwm_mode = amdgpu_dpm_get_fan_control_mode(adev);
/* never 0 (full-speed), fuse or smc-controlled always */
return sprintf(buf, "%i\n", pwm_mode == FDO_PWM_MODE_STATIC ? 1 : 2);
@@ -197,7 +233,7 @@ static ssize_t amdgpu_hwmon_set_pwm1_enable(struct device *dev,
int err;
int value;
if(!adev->pm.funcs->set_fan_control_mode)
if (!adev->pp_enabled && !adev->pm.funcs->set_fan_control_mode)
return -EINVAL;
err = kstrtoint(buf, 10, &value);
@@ -290,11 +326,11 @@ static struct attribute *hwmon_attributes[] = {
static umode_t hwmon_attributes_visible(struct kobject *kobj,
struct attribute *attr, int index)
{
struct device *dev = container_of(kobj, struct device, kobj);
struct device *dev = kobj_to_dev(kobj);
struct amdgpu_device *adev = dev_get_drvdata(dev);
umode_t effective_mode = attr->mode;
/* Skip attributes if DPM is not enabled */
/* Skip limit attributes if DPM is not enabled */
if (!adev->pm.dpm_enabled &&
(attr == &sensor_dev_attr_temp1_crit.dev_attr.attr ||
attr == &sensor_dev_attr_temp1_crit_hyst.dev_attr.attr ||
@@ -304,6 +340,9 @@ static umode_t hwmon_attributes_visible(struct kobject *kobj,
attr == &sensor_dev_attr_pwm1_min.dev_attr.attr))
return 0;
if (adev->pp_enabled)
return effective_mode;
/* Skip fan attributes if fan is not present */
if (adev->pm.no_fan &&
(attr == &sensor_dev_attr_pwm1.dev_attr.attr ||
@@ -351,7 +390,7 @@ void amdgpu_dpm_thermal_work_handler(struct work_struct *work)
container_of(work, struct amdgpu_device,
pm.dpm.thermal.work);
/* switch to the thermal state */
enum amdgpu_pm_state_type dpm_state = POWER_STATE_TYPE_INTERNAL_THERMAL;
enum amd_pm_state_type dpm_state = POWER_STATE_TYPE_INTERNAL_THERMAL;
if (!adev->pm.dpm_enabled)
return;
@@ -379,7 +418,7 @@ void amdgpu_dpm_thermal_work_handler(struct work_struct *work)
}
static struct amdgpu_ps *amdgpu_dpm_pick_power_state(struct amdgpu_device *adev,
enum amdgpu_pm_state_type dpm_state)
enum amd_pm_state_type dpm_state)
{
int i;
struct amdgpu_ps *ps;
@@ -516,7 +555,7 @@ static void amdgpu_dpm_change_power_state_locked(struct amdgpu_device *adev)
{
int i;
struct amdgpu_ps *ps;
enum amdgpu_pm_state_type dpm_state;
enum amd_pm_state_type dpm_state;
int ret;
/* if dpm init failed */
@@ -635,49 +674,54 @@ done:
void amdgpu_dpm_enable_uvd(struct amdgpu_device *adev, bool enable)
{
if (adev->pm.funcs->powergate_uvd) {
mutex_lock(&adev->pm.mutex);
/* enable/disable UVD */
if (adev->pp_enabled)
amdgpu_dpm_powergate_uvd(adev, !enable);
mutex_unlock(&adev->pm.mutex);
} else {
if (enable) {
else {
if (adev->pm.funcs->powergate_uvd) {
mutex_lock(&adev->pm.mutex);
adev->pm.dpm.uvd_active = true;
adev->pm.dpm.state = POWER_STATE_TYPE_INTERNAL_UVD;
/* enable/disable UVD */
amdgpu_dpm_powergate_uvd(adev, !enable);
mutex_unlock(&adev->pm.mutex);
} else {
mutex_lock(&adev->pm.mutex);
adev->pm.dpm.uvd_active = false;
mutex_unlock(&adev->pm.mutex);
if (enable) {
mutex_lock(&adev->pm.mutex);
adev->pm.dpm.uvd_active = true;
adev->pm.dpm.state = POWER_STATE_TYPE_INTERNAL_UVD;
mutex_unlock(&adev->pm.mutex);
} else {
mutex_lock(&adev->pm.mutex);
adev->pm.dpm.uvd_active = false;
mutex_unlock(&adev->pm.mutex);
}
amdgpu_pm_compute_clocks(adev);
}
amdgpu_pm_compute_clocks(adev);
}
}
void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable)
{
if (adev->pm.funcs->powergate_vce) {
mutex_lock(&adev->pm.mutex);
/* enable/disable VCE */
if (adev->pp_enabled)
amdgpu_dpm_powergate_vce(adev, !enable);
mutex_unlock(&adev->pm.mutex);
} else {
if (enable) {
else {
if (adev->pm.funcs->powergate_vce) {
mutex_lock(&adev->pm.mutex);
adev->pm.dpm.vce_active = true;
/* XXX select vce level based on ring/task */
adev->pm.dpm.vce_level = AMDGPU_VCE_LEVEL_AC_ALL;
amdgpu_dpm_powergate_vce(adev, !enable);
mutex_unlock(&adev->pm.mutex);
} else {
mutex_lock(&adev->pm.mutex);
adev->pm.dpm.vce_active = false;
mutex_unlock(&adev->pm.mutex);
if (enable) {
mutex_lock(&adev->pm.mutex);
adev->pm.dpm.vce_active = true;
/* XXX select vce level based on ring/task */
adev->pm.dpm.vce_level = AMDGPU_VCE_LEVEL_AC_ALL;
mutex_unlock(&adev->pm.mutex);
} else {
mutex_lock(&adev->pm.mutex);
adev->pm.dpm.vce_active = false;
mutex_unlock(&adev->pm.mutex);
}
amdgpu_pm_compute_clocks(adev);
}
amdgpu_pm_compute_clocks(adev);
}
}
@@ -685,10 +729,13 @@ void amdgpu_pm_print_power_states(struct amdgpu_device *adev)
{
int i;
for (i = 0; i < adev->pm.dpm.num_ps; i++) {
printk("== power state %d ==\n", i);
if (adev->pp_enabled)
/* TODO */
return;
for (i = 0; i < adev->pm.dpm.num_ps; i++)
amdgpu_dpm_print_power_state(adev, &adev->pm.dpm.ps[i]);
}
}
int amdgpu_pm_sysfs_init(struct amdgpu_device *adev)
@@ -698,8 +745,11 @@ int amdgpu_pm_sysfs_init(struct amdgpu_device *adev)
if (adev->pm.sysfs_initialized)
return 0;
if (adev->pm.funcs->get_temperature == NULL)
return 0;
if (!adev->pp_enabled) {
if (adev->pm.funcs->get_temperature == NULL)
return 0;
}
adev->pm.int_hwmon_dev = hwmon_device_register_with_groups(adev->dev,
DRIVER_NAME, adev,
hwmon_groups);
@@ -748,32 +798,43 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
if (!adev->pm.dpm_enabled)
return;
mutex_lock(&adev->pm.mutex);
if (adev->pp_enabled) {
int i = 0;
/* update active crtc counts */
adev->pm.dpm.new_active_crtcs = 0;
adev->pm.dpm.new_active_crtc_count = 0;
if (adev->mode_info.num_crtc && adev->mode_info.mode_config_initialized) {
list_for_each_entry(crtc,
&ddev->mode_config.crtc_list, head) {
amdgpu_crtc = to_amdgpu_crtc(crtc);
if (crtc->enabled) {
adev->pm.dpm.new_active_crtcs |= (1 << amdgpu_crtc->crtc_id);
adev->pm.dpm.new_active_crtc_count++;
amdgpu_display_bandwidth_update(adev);
mutex_lock(&adev->ring_lock);
for (i = 0; i < AMDGPU_MAX_RINGS; i++) {
struct amdgpu_ring *ring = adev->rings[i];
if (ring && ring->ready)
amdgpu_fence_wait_empty(ring);
}
mutex_unlock(&adev->ring_lock);
amdgpu_dpm_dispatch_task(adev, AMD_PP_EVENT_DISPLAY_CONFIG_CHANGE, NULL, NULL);
} else {
mutex_lock(&adev->pm.mutex);
adev->pm.dpm.new_active_crtcs = 0;
adev->pm.dpm.new_active_crtc_count = 0;
if (adev->mode_info.num_crtc && adev->mode_info.mode_config_initialized) {
list_for_each_entry(crtc,
&ddev->mode_config.crtc_list, head) {
amdgpu_crtc = to_amdgpu_crtc(crtc);
if (crtc->enabled) {
adev->pm.dpm.new_active_crtcs |= (1 << amdgpu_crtc->crtc_id);
adev->pm.dpm.new_active_crtc_count++;
}
}
}
/* update battery/ac status */
if (power_supply_is_system_supplied() > 0)
adev->pm.dpm.ac_power = true;
else
adev->pm.dpm.ac_power = false;
amdgpu_dpm_change_power_state_locked(adev);
mutex_unlock(&adev->pm.mutex);
}
/* update battery/ac status */
if (power_supply_is_system_supplied() > 0)
adev->pm.dpm.ac_power = true;
else
adev->pm.dpm.ac_power = false;
amdgpu_dpm_change_power_state_locked(adev);
mutex_unlock(&adev->pm.mutex);
}
/*
@@ -787,7 +848,13 @@ static int amdgpu_debugfs_pm_info(struct seq_file *m, void *data)
struct drm_device *dev = node->minor->dev;
struct amdgpu_device *adev = dev->dev_private;
if (adev->pm.dpm_enabled) {
if (!adev->pm.dpm_enabled) {
seq_printf(m, "dpm not enabled\n");
return 0;
}
if (adev->pp_enabled) {
amdgpu_dpm_debugfs_print_current_performance_level(adev, m);
} else {
mutex_lock(&adev->pm.mutex);
if (adev->pm.funcs->debugfs_print_current_performance_level)
amdgpu_dpm_debugfs_print_current_performance_level(adev, m);

@@ -0,0 +1,317 @@
/*
* Copyright 2015 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: AMD
*
*/
#include "atom.h"
#include "amdgpu.h"
#include "amd_shared.h"
#include <linux/module.h>
#include <linux/moduleparam.h>
#include "amdgpu_pm.h"
#include <drm/amdgpu_drm.h>
#include "amdgpu_powerplay.h"
#include "cik_dpm.h"
#include "vi_dpm.h"
static int amdgpu_powerplay_init(struct amdgpu_device *adev)
{
int ret = 0;
struct amd_powerplay *amd_pp;
amd_pp = &(adev->powerplay);
if (adev->pp_enabled) {
#ifdef CONFIG_DRM_AMD_POWERPLAY
struct amd_pp_init *pp_init;
pp_init = kzalloc(sizeof(struct amd_pp_init), GFP_KERNEL);
if (pp_init == NULL)
return -ENOMEM;
pp_init->chip_family = adev->family;
pp_init->chip_id = adev->asic_type;
pp_init->device = amdgpu_cgs_create_device(adev);
ret = amd_powerplay_init(pp_init, amd_pp);
kfree(pp_init);
#endif
} else {
amd_pp->pp_handle = (void *)adev;
switch (adev->asic_type) {
#ifdef CONFIG_DRM_AMDGPU_CIK
case CHIP_BONAIRE:
case CHIP_HAWAII:
amd_pp->ip_funcs = &ci_dpm_ip_funcs;
break;
case CHIP_KABINI:
case CHIP_MULLINS:
case CHIP_KAVERI:
amd_pp->ip_funcs = &kv_dpm_ip_funcs;
break;
#endif
case CHIP_TOPAZ:
amd_pp->ip_funcs = &iceland_dpm_ip_funcs;
break;
case CHIP_TONGA:
amd_pp->ip_funcs = &tonga_dpm_ip_funcs;
break;
case CHIP_FIJI:
amd_pp->ip_funcs = &fiji_dpm_ip_funcs;
break;
case CHIP_CARRIZO:
case CHIP_STONEY:
amd_pp->ip_funcs = &cz_dpm_ip_funcs;
break;
default:
ret = -EINVAL;
break;
}
}
return ret;
}
static int amdgpu_pp_early_init(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
int ret = 0;
#ifdef CONFIG_DRM_AMD_POWERPLAY
switch (adev->asic_type) {
case CHIP_TONGA:
case CHIP_FIJI:
adev->pp_enabled = (amdgpu_powerplay > 0) ? true : false;
break;
default:
adev->pp_enabled = (amdgpu_powerplay > 0) ? true : false;
break;
}
#else
adev->pp_enabled = false;
#endif
ret = amdgpu_powerplay_init(adev);
if (ret)
return ret;
if (adev->powerplay.ip_funcs->early_init)
ret = adev->powerplay.ip_funcs->early_init(
adev->powerplay.pp_handle);
return ret;
}
static int amdgpu_pp_late_init(void *handle)
{
int ret = 0;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
if (adev->powerplay.ip_funcs->late_init)
ret = adev->powerplay.ip_funcs->late_init(
adev->powerplay.pp_handle);
#ifdef CONFIG_DRM_AMD_POWERPLAY
if (adev->pp_enabled)
amdgpu_pm_sysfs_init(adev);
#endif
return ret;
}
static int amdgpu_pp_sw_init(void *handle)
{
int ret = 0;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
if (adev->powerplay.ip_funcs->sw_init)
ret = adev->powerplay.ip_funcs->sw_init(
adev->powerplay.pp_handle);
#ifdef CONFIG_DRM_AMD_POWERPLAY
if (adev->pp_enabled) {
if (amdgpu_dpm == 0)
adev->pm.dpm_enabled = false;
else
adev->pm.dpm_enabled = true;
}
#endif
return ret;
}
static int amdgpu_pp_sw_fini(void *handle)
{
int ret = 0;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
if (adev->powerplay.ip_funcs->sw_fini)
ret = adev->powerplay.ip_funcs->sw_fini(
adev->powerplay.pp_handle);
if (ret)
return ret;
#ifdef CONFIG_DRM_AMD_POWERPLAY
if (adev->pp_enabled) {
amdgpu_pm_sysfs_fini(adev);
amd_powerplay_fini(adev->powerplay.pp_handle);
}
#endif
return ret;
}
static int amdgpu_pp_hw_init(void *handle)
{
int ret = 0;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
if (adev->pp_enabled && adev->firmware.smu_load)
amdgpu_ucode_init_bo(adev);
if (adev->powerplay.ip_funcs->hw_init)
ret = adev->powerplay.ip_funcs->hw_init(
adev->powerplay.pp_handle);
return ret;
}
static int amdgpu_pp_hw_fini(void *handle)
{
int ret = 0;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
if (adev->powerplay.ip_funcs->hw_fini)
ret = adev->powerplay.ip_funcs->hw_fini(
adev->powerplay.pp_handle);
if (adev->pp_enabled && adev->firmware.smu_load)
amdgpu_ucode_fini_bo(adev);
return ret;
}
static int amdgpu_pp_suspend(void *handle)
{
int ret = 0;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
if (adev->powerplay.ip_funcs->suspend)
ret = adev->powerplay.ip_funcs->suspend(
adev->powerplay.pp_handle);
return ret;
}
static int amdgpu_pp_resume(void *handle)
{
int ret = 0;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
if (adev->powerplay.ip_funcs->resume)
ret = adev->powerplay.ip_funcs->resume(
adev->powerplay.pp_handle);
return ret;
}
static int amdgpu_pp_set_clockgating_state(void *handle,
enum amd_clockgating_state state)
{
int ret = 0;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
if (adev->powerplay.ip_funcs->set_clockgating_state)
ret = adev->powerplay.ip_funcs->set_clockgating_state(
adev->powerplay.pp_handle, state);
return ret;
}
static int amdgpu_pp_set_powergating_state(void *handle,
enum amd_powergating_state state)
{
int ret = 0;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
if (adev->powerplay.ip_funcs->set_powergating_state)
ret = adev->powerplay.ip_funcs->set_powergating_state(
adev->powerplay.pp_handle, state);
return ret;
}
static bool amdgpu_pp_is_idle(void *handle)
{
bool ret = true;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
if (adev->powerplay.ip_funcs->is_idle)
ret = adev->powerplay.ip_funcs->is_idle(
adev->powerplay.pp_handle);
return ret;
}
static int amdgpu_pp_wait_for_idle(void *handle)
{
int ret = 0;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
if (adev->powerplay.ip_funcs->wait_for_idle)
ret = adev->powerplay.ip_funcs->wait_for_idle(
adev->powerplay.pp_handle);
return ret;
}
static int amdgpu_pp_soft_reset(void *handle)
{
int ret = 0;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
if (adev->powerplay.ip_funcs->soft_reset)
ret = adev->powerplay.ip_funcs->soft_reset(
adev->powerplay.pp_handle);
return ret;
}
static void amdgpu_pp_print_status(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
if (adev->powerplay.ip_funcs->print_status)
adev->powerplay.ip_funcs->print_status(
adev->powerplay.pp_handle);
}
const struct amd_ip_funcs amdgpu_pp_ip_funcs = {
.early_init = amdgpu_pp_early_init,
.late_init = amdgpu_pp_late_init,
.sw_init = amdgpu_pp_sw_init,
.sw_fini = amdgpu_pp_sw_fini,
.hw_init = amdgpu_pp_hw_init,
.hw_fini = amdgpu_pp_hw_fini,
.suspend = amdgpu_pp_suspend,
.resume = amdgpu_pp_resume,
.is_idle = amdgpu_pp_is_idle,
.wait_for_idle = amdgpu_pp_wait_for_idle,
.soft_reset = amdgpu_pp_soft_reset,
.print_status = amdgpu_pp_print_status,
.set_clockgating_state = amdgpu_pp_set_clockgating_state,
.set_powergating_state = amdgpu_pp_set_powergating_state,
};

@@ -0,0 +1,33 @@
/*
* Copyright 2015 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: AMD
*
*/
#ifndef __AMDGPU_POWERPLAY_H__
#define __AMDGPU_POWERPLAY_H__
#include "amd_shared.h"
extern const struct amd_ip_funcs amdgpu_pp_ip_funcs;
#endif /* __AMDGPU_POWERPLAY_H__ */

@@ -293,7 +293,8 @@ int amdgpu_sync_rings(struct amdgpu_sync *sync,
fence = to_amdgpu_fence(sync->sync_to[i]);
/* check if we really need to sync */
if (!amdgpu_fence_need_sync(fence, ring))
if (!amdgpu_enable_scheduler &&
!amdgpu_fence_need_sync(fence, ring))
continue;
/* prevent GPU deadlocks */
@@ -303,7 +304,7 @@ int amdgpu_sync_rings(struct amdgpu_sync *sync,
}
if (amdgpu_enable_scheduler || !amdgpu_enable_semaphores) {
r = fence_wait(&fence->base, true);
r = fence_wait(sync->sync_to[i], true);
if (r)
return r;
continue;

@@ -75,50 +75,77 @@ static unsigned amdgpu_vm_directory_size(struct amdgpu_device *adev)
}
/**
* amdgpu_vm_get_bos - add the vm BOs to a validation list
* amdgpu_vm_get_pd_bo - add the VM PD to a validation list
*
* @vm: vm providing the BOs
* @head: head of validation list
* @validated: head of validation list
* @entry: entry to add
*
* Add the page directory to the list of BOs to
* validate for command submission (cayman+).
* validate for command submission.
*/
struct amdgpu_bo_list_entry *amdgpu_vm_get_bos(struct amdgpu_device *adev,
struct amdgpu_vm *vm,
struct list_head *head)
void amdgpu_vm_get_pd_bo(struct amdgpu_vm *vm,
struct list_head *validated,
struct amdgpu_bo_list_entry *entry)
{
struct amdgpu_bo_list_entry *list;
unsigned i, idx;
entry->robj = vm->page_directory;
entry->prefered_domains = AMDGPU_GEM_DOMAIN_VRAM;
entry->allowed_domains = AMDGPU_GEM_DOMAIN_VRAM;
entry->priority = 0;
entry->tv.bo = &vm->page_directory->tbo;
entry->tv.shared = true;
list_add(&entry->tv.head, validated);
}
list = drm_malloc_ab(vm->max_pde_used + 2,
sizeof(struct amdgpu_bo_list_entry));
if (!list) {
return NULL;
}
/**
* amdgpu_vm_get_bos - add the vm BOs to a duplicates list
*
* @vm: vm providing the BOs
* @duplicates: head of duplicates list
*
* Add the page directory to the BO duplicates list
* for command submission.
*/
void amdgpu_vm_get_pt_bos(struct amdgpu_vm *vm, struct list_head *duplicates)
{
unsigned i;
/* add the vm page table to the list */
list[0].robj = vm->page_directory;
list[0].prefered_domains = AMDGPU_GEM_DOMAIN_VRAM;
list[0].allowed_domains = AMDGPU_GEM_DOMAIN_VRAM;
list[0].priority = 0;
list[0].tv.bo = &vm->page_directory->tbo;
list[0].tv.shared = true;
list_add(&list[0].tv.head, head);
for (i = 0; i <= vm->max_pde_used; ++i) {
struct amdgpu_bo_list_entry *entry = &vm->page_tables[i].entry;
for (i = 0, idx = 1; i <= vm->max_pde_used; i++) {
if (!vm->page_tables[i].bo)
if (!entry->robj)
continue;
list[idx].robj = vm->page_tables[i].bo;
list[idx].prefered_domains = AMDGPU_GEM_DOMAIN_VRAM;
list[idx].allowed_domains = AMDGPU_GEM_DOMAIN_VRAM;
list[idx].priority = 0;
list[idx].tv.bo = &list[idx].robj->tbo;
list[idx].tv.shared = true;
list_add(&list[idx++].tv.head, head);
list_add(&entry->tv.head, duplicates);
}
return list;
}
/**
* amdgpu_vm_move_pt_bos_in_lru - move the PT BOs to the LRU tail
*
* @adev: amdgpu device instance
* @vm: vm providing the BOs
*
* Move the PT BOs to the tail of the LRU.
*/
void amdgpu_vm_move_pt_bos_in_lru(struct amdgpu_device *adev,
struct amdgpu_vm *vm)
{
struct ttm_bo_global *glob = adev->mman.bdev.glob;
unsigned i;
spin_lock(&glob->lru_lock);
for (i = 0; i <= vm->max_pde_used; ++i) {
struct amdgpu_bo_list_entry *entry = &vm->page_tables[i].entry;
if (!entry->robj)
continue;
ttm_bo_move_to_lru_tail(&entry->robj->tbo);
}
spin_unlock(&glob->lru_lock);
}
/**
@@ -461,7 +488,7 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
/* walk over the address space and update the page directory */
for (pt_idx = 0; pt_idx <= vm->max_pde_used; ++pt_idx) {
struct amdgpu_bo *bo = vm->page_tables[pt_idx].bo;
struct amdgpu_bo *bo = vm->page_tables[pt_idx].entry.robj;
uint64_t pde, pt;
if (bo == NULL)
@@ -638,7 +665,7 @@ static int amdgpu_vm_update_ptes(struct amdgpu_device *adev,
/* walk over the address space and update the page tables */
for (addr = start; addr < end; ) {
uint64_t pt_idx = addr >> amdgpu_vm_block_size;
struct amdgpu_bo *pt = vm->page_tables[pt_idx].bo;
struct amdgpu_bo *pt = vm->page_tables[pt_idx].entry.robj;
unsigned nptes;
uint64_t pte;
int r;
@@ -1010,13 +1037,13 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
return -EINVAL;
/* make sure object fit at this offset */
eaddr = saddr + size;
eaddr = saddr + size - 1;
if ((saddr >= eaddr) || (offset + size > amdgpu_bo_size(bo_va->bo)))
return -EINVAL;
last_pfn = eaddr / AMDGPU_GPU_PAGE_SIZE;
if (last_pfn > adev->vm_manager.max_pfn) {
dev_err(adev->dev, "va above limit (0x%08X > 0x%08X)\n",
if (last_pfn >= adev->vm_manager.max_pfn) {
dev_err(adev->dev, "va above limit (0x%08X >= 0x%08X)\n",
last_pfn, adev->vm_manager.max_pfn);
return -EINVAL;
}
@@ -1025,7 +1052,7 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
eaddr /= AMDGPU_GPU_PAGE_SIZE;
spin_lock(&vm->it_lock);
it = interval_tree_iter_first(&vm->va, saddr, eaddr - 1);
it = interval_tree_iter_first(&vm->va, saddr, eaddr);
spin_unlock(&vm->it_lock);
if (it) {
struct amdgpu_bo_va_mapping *tmp;
@@ -1046,7 +1073,7 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
INIT_LIST_HEAD(&mapping->list);
mapping->it.start = saddr;
mapping->it.last = eaddr - 1;
mapping->it.last = eaddr;
mapping->offset = offset;
mapping->flags = flags;
@@ -1070,9 +1097,11 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
/* walk over the address space and allocate the page tables */
for (pt_idx = saddr; pt_idx <= eaddr; ++pt_idx) {
struct reservation_object *resv = vm->page_directory->tbo.resv;
struct amdgpu_bo_list_entry *entry;
struct amdgpu_bo *pt;
if (vm->page_tables[pt_idx].bo)
entry = &vm->page_tables[pt_idx].entry;
if (entry->robj)
continue;
r = amdgpu_bo_create(adev, AMDGPU_VM_PTE_COUNT * 8,
@@ -1094,8 +1123,13 @@ int amdgpu_vm_bo_map(struct amdgpu_device *adev,
goto error_free;
}
entry->robj = pt;
entry->prefered_domains = AMDGPU_GEM_DOMAIN_VRAM;
entry->allowed_domains = AMDGPU_GEM_DOMAIN_VRAM;
entry->priority = 0;
entry->tv.bo = &entry->robj->tbo;
entry->tv.shared = true;
vm->page_tables[pt_idx].addr = 0;
vm->page_tables[pt_idx].bo = pt;
}
return 0;
@@ -1326,7 +1360,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
}
for (i = 0; i < amdgpu_vm_num_pdes(adev); i++)
amdgpu_bo_unref(&vm->page_tables[i].bo);
amdgpu_bo_unref(&vm->page_tables[i].entry.robj);
kfree(vm->page_tables);
amdgpu_bo_unref(&vm->page_directory);

@@ -243,7 +243,7 @@ static void amdgpu_atombios_dp_get_adjust_train(const u8 link_status[DP_LINK_STA
/* convert bits per color to bits per pixel */
/* get bpc from the EDID */
static int amdgpu_atombios_dp_convert_bpc_to_bpp(int bpc)
static unsigned amdgpu_atombios_dp_convert_bpc_to_bpp(int bpc)
{
if (bpc == 0)
return 24;
@@ -251,64 +251,32 @@ static int amdgpu_atombios_dp_convert_bpc_to_bpp(int bpc)
return bpc * 3;
}
/* get the max pix clock supported by the link rate and lane num */
static int amdgpu_atombios_dp_get_max_dp_pix_clock(int link_rate,
int lane_num,
int bpp)
{
return (link_rate * lane_num * 8) / bpp;
}
/***** amdgpu specific DP functions *****/
/* First get the min lane# when low rate is used according to pixel clock
* (prefer low rate), second check max lane# supported by DP panel,
* if the max lane# < low rate lane# then use max lane# instead.
*/
static int amdgpu_atombios_dp_get_dp_lane_number(struct drm_connector *connector,
static int amdgpu_atombios_dp_get_dp_link_config(struct drm_connector *connector,
const u8 dpcd[DP_DPCD_SIZE],
int pix_clock)
unsigned pix_clock,
unsigned *dp_lanes, unsigned *dp_rate)
{
int bpp = amdgpu_atombios_dp_convert_bpc_to_bpp(amdgpu_connector_get_monitor_bpc(connector));
int max_link_rate = drm_dp_max_link_rate(dpcd);
int max_lane_num = drm_dp_max_lane_count(dpcd);
int lane_num;
int max_dp_pix_clock;
unsigned bpp =
amdgpu_atombios_dp_convert_bpc_to_bpp(amdgpu_connector_get_monitor_bpc(connector));
static const unsigned link_rates[3] = { 162000, 270000, 540000 };
unsigned max_link_rate = drm_dp_max_link_rate(dpcd);
unsigned max_lane_num = drm_dp_max_lane_count(dpcd);
unsigned lane_num, i, max_pix_clock;
for (lane_num = 1; lane_num < max_lane_num; lane_num <<= 1) {
max_dp_pix_clock = amdgpu_atombios_dp_get_max_dp_pix_clock(max_link_rate, lane_num, bpp);
if (pix_clock <= max_dp_pix_clock)
break;
for (lane_num = 1; lane_num <= max_lane_num; lane_num <<= 1) {
for (i = 0; i < ARRAY_SIZE(link_rates) && link_rates[i] <= max_link_rate; i++) {
max_pix_clock = (lane_num * link_rates[i] * 8) / bpp;
if (max_pix_clock >= pix_clock) {
*dp_lanes = lane_num;
*dp_rate = link_rates[i];
return 0;
}
}
}
return lane_num;
}
static int amdgpu_atombios_dp_get_dp_link_clock(struct drm_connector *connector,
const u8 dpcd[DP_DPCD_SIZE],
int pix_clock)
{
int bpp = amdgpu_atombios_dp_convert_bpc_to_bpp(amdgpu_connector_get_monitor_bpc(connector));
int lane_num, max_pix_clock;
if (amdgpu_connector_encoder_get_dp_bridge_encoder_id(connector) ==
ENCODER_OBJECT_ID_NUTMEG)
return 270000;
lane_num = amdgpu_atombios_dp_get_dp_lane_number(connector, dpcd, pix_clock);
max_pix_clock = amdgpu_atombios_dp_get_max_dp_pix_clock(162000, lane_num, bpp);
if (pix_clock <= max_pix_clock)
return 162000;
max_pix_clock = amdgpu_atombios_dp_get_max_dp_pix_clock(270000, lane_num, bpp);
if (pix_clock <= max_pix_clock)
return 270000;
if (amdgpu_connector_is_dp12_capable(connector)) {
max_pix_clock = amdgpu_atombios_dp_get_max_dp_pix_clock(540000, lane_num, bpp);
if (pix_clock <= max_pix_clock)
return 540000;
}
return drm_dp_max_link_rate(dpcd);
return -EINVAL;
}
static u8 amdgpu_atombios_dp_encoder_service(struct amdgpu_device *adev,
@@ -422,6 +390,7 @@ void amdgpu_atombios_dp_set_link_config(struct drm_connector *connector,
{
struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(connector);
struct amdgpu_connector_atom_dig *dig_connector;
int ret;
if (!amdgpu_connector->con_priv)
return;
@@ -429,10 +398,14 @@ void amdgpu_atombios_dp_set_link_config(struct drm_connector *connector,
if ((dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) ||
(dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_eDP)) {
dig_connector->dp_clock =
amdgpu_atombios_dp_get_dp_link_clock(connector, dig_connector->dpcd, mode->clock);
dig_connector->dp_lane_count =
amdgpu_atombios_dp_get_dp_lane_number(connector, dig_connector->dpcd, mode->clock);
ret = amdgpu_atombios_dp_get_dp_link_config(connector, dig_connector->dpcd,
mode->clock,
&dig_connector->dp_lane_count,
&dig_connector->dp_clock);
if (ret) {
dig_connector->dp_clock = 0;
dig_connector->dp_lane_count = 0;
}
}
}
@@ -441,14 +414,17 @@ int amdgpu_atombios_dp_mode_valid_helper(struct drm_connector *connector,
{
struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(connector);
struct amdgpu_connector_atom_dig *dig_connector;
int dp_clock;
unsigned dp_lanes, dp_clock;
int ret;
if (!amdgpu_connector->con_priv)
return MODE_CLOCK_HIGH;
dig_connector = amdgpu_connector->con_priv;
dp_clock =
amdgpu_atombios_dp_get_dp_link_clock(connector, dig_connector->dpcd, mode->clock);
ret = amdgpu_atombios_dp_get_dp_link_config(connector, dig_connector->dpcd,
mode->clock, &dp_lanes, &dp_clock);
if (ret)
return MODE_CLOCK_HIGH;
if ((dp_clock == 540000) &&
(!amdgpu_connector_is_dp12_capable(connector)))

@@ -1395,7 +1395,6 @@ static void ci_thermal_stop_thermal_controller(struct amdgpu_device *adev)
ci_fan_ctrl_set_default_mode(adev);
}
#if 0
static int ci_read_smc_soft_register(struct amdgpu_device *adev,
u16 reg_offset, u32 *value)
{
@@ -1405,7 +1404,6 @@ static int ci_read_smc_soft_register(struct amdgpu_device *adev,
pi->soft_regs_start + reg_offset,
value, pi->sram_end);
}
#endif
static int ci_write_smc_soft_register(struct amdgpu_device *adev,
u16 reg_offset, u32 value)
@@ -6084,11 +6082,23 @@ ci_dpm_debugfs_print_current_performance_level(struct amdgpu_device *adev,
struct amdgpu_ps *rps = &pi->current_rps;
u32 sclk = ci_get_average_sclk_freq(adev);
u32 mclk = ci_get_average_mclk_freq(adev);
u32 activity_percent = 50;
int ret;
ret = ci_read_smc_soft_register(adev, offsetof(SMU7_SoftRegisters, AverageGraphicsA),
&activity_percent);
if (ret == 0) {
activity_percent += 0x80;
activity_percent >>= 8;
activity_percent = activity_percent > 100 ? 100 : activity_percent;
}
seq_printf(m, "uvd %sabled\n", pi->uvd_enabled ? "en" : "dis");
seq_printf(m, "vce %sabled\n", rps->vce_active ? "en" : "dis");
seq_printf(m, "power level avg sclk: %u mclk: %u\n",
sclk, mclk);
seq_printf(m, "GPU load: %u %%\n", activity_percent);
}
static void ci_dpm_print_power_state(struct amdgpu_device *adev,

@@ -32,6 +32,7 @@
#include "amdgpu_vce.h"
#include "cikd.h"
#include "atom.h"
#include "amd_pcie.h"
#include "cik.h"
#include "gmc_v7_0.h"
@@ -65,6 +66,7 @@
#include "oss/oss_2_0_sh_mask.h"
#include "amdgpu_amdkfd.h"
#include "amdgpu_powerplay.h"
/*
* Indirect registers accessor
@@ -929,6 +931,37 @@ static bool cik_read_disabled_bios(struct amdgpu_device *adev)
return r;
}
static bool cik_read_bios_from_rom(struct amdgpu_device *adev,
u8 *bios, u32 length_bytes)
{
u32 *dw_ptr;
unsigned long flags;
u32 i, length_dw;
if (bios == NULL)
return false;
if (length_bytes == 0)
return false;
/* APU vbios image is part of sbios image */
if (adev->flags & AMD_IS_APU)
return false;
dw_ptr = (u32 *)bios;
length_dw = ALIGN(length_bytes, 4) / 4;
/* take the smc lock since we are using the smc index */
spin_lock_irqsave(&adev->smc_idx_lock, flags);
/* set rom index to 0 */
WREG32(mmSMC_IND_INDEX_0, ixROM_INDEX);
WREG32(mmSMC_IND_DATA_0, 0);
/* set index to data for continuous read */
WREG32(mmSMC_IND_INDEX_0, ixROM_DATA);
for (i = 0; i < length_dw; i++)
dw_ptr[i] = RREG32(mmSMC_IND_DATA_0);
spin_unlock_irqrestore(&adev->smc_idx_lock, flags);
return true;
}
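The ROM is read through the SMC index/data pair one dword at a time, so the byte count is first rounded up to whole dwords. A sketch of that length handling, where ALIGN_UP mirrors the kernel's ALIGN() macro for power-of-two alignments:

```c
#include <assert.h>
#include <stdint.h>

/* Round x up to a multiple of a (a must be a power of two), as the
 * kernel's ALIGN() does. */
#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((uint32_t)(a) - 1))

static uint32_t bytes_to_dwords(uint32_t length_bytes)
{
	return ALIGN_UP(length_bytes, 4) / 4;
}
```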
static struct amdgpu_allowed_register_entry cik_allowed_read_registers[] = {
{mmGRBM_STATUS, false},
{mmGB_ADDR_CONFIG, false},
@@ -1563,8 +1596,8 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev)
{
struct pci_dev *root = adev->pdev->bus->self;
int bridge_pos, gpu_pos;
u32 speed_cntl, mask, current_data_rate;
int ret, i;
u32 speed_cntl, current_data_rate;
int i;
u16 tmp16;
if (pci_is_root_bus(adev->pdev->bus))
@@ -1576,23 +1609,20 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev)
if (adev->flags & AMD_IS_APU)
return;
ret = drm_pcie_get_speed_cap_mask(adev->ddev, &mask);
if (ret != 0)
return;
if (!(mask & (DRM_PCIE_SPEED_50 | DRM_PCIE_SPEED_80)))
if (!(adev->pm.pcie_gen_mask & (CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2 |
CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)))
return;
speed_cntl = RREG32_PCIE(ixPCIE_LC_SPEED_CNTL);
current_data_rate = (speed_cntl & PCIE_LC_SPEED_CNTL__LC_CURRENT_DATA_RATE_MASK) >>
PCIE_LC_SPEED_CNTL__LC_CURRENT_DATA_RATE__SHIFT;
if (mask & DRM_PCIE_SPEED_80) {
if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3) {
if (current_data_rate == 2) {
DRM_INFO("PCIE gen 3 link speeds already enabled\n");
return;
}
DRM_INFO("enabling PCIE gen 3 link speeds, disable with amdgpu.pcie_gen2=0\n");
} else if (mask & DRM_PCIE_SPEED_50) {
} else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2) {
if (current_data_rate == 1) {
DRM_INFO("PCIE gen 2 link speeds already enabled\n");
return;
@@ -1608,7 +1638,7 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev)
if (!gpu_pos)
return;
if (mask & DRM_PCIE_SPEED_80) {
if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3) {
/* re-try equalization if gen3 is not already enabled */
if (current_data_rate != 2) {
u16 bridge_cfg, gpu_cfg;
@@ -1703,9 +1733,9 @@ static void cik_pcie_gen3_enable(struct amdgpu_device *adev)
pci_read_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL2, &tmp16);
tmp16 &= ~0xf;
if (mask & DRM_PCIE_SPEED_80)
if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)
tmp16 |= 3; /* gen3 */
else if (mask & DRM_PCIE_SPEED_50)
else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2)
tmp16 |= 2; /* gen2 */
else
tmp16 |= 1; /* gen1 */
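The LNKCTL2 target-speed selection above boils down to "highest supported generation wins", now driven by the CAIL gen mask instead of the old drm_pcie_get_speed_cap_mask() query. A hypothetical helper showing the same priority chain (the SPEED_SUPPORT_* values below are stand-ins for the CAIL_PCIE_LINK_SPEED_SUPPORT_* defines):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in values for the CAIL link-speed support mask bits. */
#define SPEED_SUPPORT_GEN2	(1 << 1)
#define SPEED_SUPPORT_GEN3	(1 << 2)

static uint16_t target_link_speed(uint32_t gen_mask)
{
	if (gen_mask & SPEED_SUPPORT_GEN3)
		return 3;	/* gen3, 8.0 GT/s */
	if (gen_mask & SPEED_SUPPORT_GEN2)
		return 2;	/* gen2, 5.0 GT/s */
	return 1;		/* gen1, 2.5 GT/s */
}
```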
@@ -1922,7 +1952,7 @@ static const struct amdgpu_ip_block_version bonaire_ip_blocks[] =
.major = 7,
.minor = 0,
.rev = 0,
.funcs = &ci_dpm_ip_funcs,
.funcs = &amdgpu_pp_ip_funcs,
},
{
.type = AMD_IP_BLOCK_TYPE_DCE,
@@ -1990,7 +2020,7 @@ static const struct amdgpu_ip_block_version hawaii_ip_blocks[] =
.major = 7,
.minor = 0,
.rev = 0,
.funcs = &ci_dpm_ip_funcs,
.funcs = &amdgpu_pp_ip_funcs,
},
{
.type = AMD_IP_BLOCK_TYPE_DCE,
@@ -2058,7 +2088,7 @@ static const struct amdgpu_ip_block_version kabini_ip_blocks[] =
.major = 7,
.minor = 0,
.rev = 0,
.funcs = &kv_dpm_ip_funcs,
.funcs = &amdgpu_pp_ip_funcs,
},
{
.type = AMD_IP_BLOCK_TYPE_DCE,
@@ -2126,7 +2156,7 @@ static const struct amdgpu_ip_block_version mullins_ip_blocks[] =
.major = 7,
.minor = 0,
.rev = 0,
.funcs = &kv_dpm_ip_funcs,
.funcs = &amdgpu_pp_ip_funcs,
},
{
.type = AMD_IP_BLOCK_TYPE_DCE,
@@ -2194,7 +2224,7 @@ static const struct amdgpu_ip_block_version kaveri_ip_blocks[] =
.major = 7,
.minor = 0,
.rev = 0,
.funcs = &kv_dpm_ip_funcs,
.funcs = &amdgpu_pp_ip_funcs,
},
{
.type = AMD_IP_BLOCK_TYPE_DCE,
@@ -2267,6 +2297,7 @@ int cik_set_ip_blocks(struct amdgpu_device *adev)
static const struct amdgpu_asic_funcs cik_asic_funcs =
{
.read_disabled_bios = &cik_read_disabled_bios,
.read_bios_from_rom = &cik_read_bios_from_rom,
.read_register = &cik_read_register,
.reset = &cik_asic_reset,
.set_vga_state = &cik_vga_set_state,
@@ -2417,6 +2448,8 @@ static int cik_common_early_init(void *handle)
return -EINVAL;
}
amdgpu_get_pcie_info(adev);
return 0;
}


@@ -274,6 +274,11 @@ static void cik_ih_set_rptr(struct amdgpu_device *adev)
static int cik_ih_early_init(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
int ret;
ret = amdgpu_irq_add_domain(adev);
if (ret)
return ret;
cik_ih_set_interrupt_funcs(adev);
@@ -300,6 +305,7 @@ static int cik_ih_sw_fini(void *handle)
amdgpu_irq_fini(adev);
amdgpu_ih_ring_fini(adev);
amdgpu_irq_remove_domain(adev);
return 0;
}


@@ -1078,6 +1078,37 @@ static uint32_t cz_get_eclk_level(struct amdgpu_device *adev,
return i;
}
static uint32_t cz_get_uvd_level(struct amdgpu_device *adev,
uint32_t clock, uint16_t msg)
{
int i = 0;
struct amdgpu_uvd_clock_voltage_dependency_table *table =
&adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table;
switch (msg) {
case PPSMC_MSG_SetUvdSoftMin:
case PPSMC_MSG_SetUvdHardMin:
for (i = 0; i < table->count; i++)
if (clock <= table->entries[i].vclk)
break;
if (i == table->count)
i = table->count - 1;
break;
case PPSMC_MSG_SetUvdSoftMax:
case PPSMC_MSG_SetUvdHardMax:
for (i = table->count - 1; i >= 0; i--)
if (clock >= table->entries[i].vclk)
break;
if (i < 0)
i = 0;
break;
default:
break;
}
return i;
}
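The table search in cz_get_uvd_level() above has two directions: a "min" message picks the first level whose clock satisfies the request, a "max" message picks the highest level not above it, and each falls back to the nearest valid level when nothing matches. A standalone sketch of the same logic over a plain clock array:

```c
#include <assert.h>
#include <stdint.h>

enum bound { BOUND_MIN, BOUND_MAX };

/* Hypothetical standalone version of the dependency-table lookup. */
static int clock_to_level(const uint32_t *vclk, int count,
			  uint32_t clock, enum bound b)
{
	int i;

	if (b == BOUND_MIN) {
		for (i = 0; i < count; i++)
			if (clock <= vclk[i])
				return i;
		return count - 1;	/* request above table: top level */
	}
	for (i = count - 1; i >= 0; i--)
		if (clock >= vclk[i])
			return i;
	return 0;			/* request below table: bottom level */
}
```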
static int cz_program_bootup_state(struct amdgpu_device *adev)
{
struct cz_power_info *pi = cz_get_pi(adev);
@@ -1739,6 +1770,200 @@ static int cz_dpm_unforce_dpm_levels(struct amdgpu_device *adev)
return 0;
}
static int cz_dpm_uvd_force_highest(struct amdgpu_device *adev)
{
struct cz_power_info *pi = cz_get_pi(adev);
int ret = 0;
if (pi->uvd_dpm.soft_min_clk != pi->uvd_dpm.soft_max_clk) {
pi->uvd_dpm.soft_min_clk =
pi->uvd_dpm.soft_max_clk;
ret = cz_send_msg_to_smc_with_parameter(adev,
PPSMC_MSG_SetUvdSoftMin,
cz_get_uvd_level(adev,
pi->uvd_dpm.soft_min_clk,
PPSMC_MSG_SetUvdSoftMin));
if (ret)
return ret;
}
return ret;
}
static int cz_dpm_uvd_force_lowest(struct amdgpu_device *adev)
{
struct cz_power_info *pi = cz_get_pi(adev);
int ret = 0;
if (pi->uvd_dpm.soft_max_clk != pi->uvd_dpm.soft_min_clk) {
pi->uvd_dpm.soft_max_clk = pi->uvd_dpm.soft_min_clk;
ret = cz_send_msg_to_smc_with_parameter(adev,
PPSMC_MSG_SetUvdSoftMax,
cz_get_uvd_level(adev,
pi->uvd_dpm.soft_max_clk,
PPSMC_MSG_SetUvdSoftMax));
if (ret)
return ret;
}
return ret;
}
static uint32_t cz_dpm_get_max_uvd_level(struct amdgpu_device *adev)
{
struct cz_power_info *pi = cz_get_pi(adev);
if (!pi->max_uvd_level) {
cz_send_msg_to_smc(adev, PPSMC_MSG_GetMaxUvdLevel);
pi->max_uvd_level = cz_get_argument(adev) + 1;
}
if (pi->max_uvd_level > CZ_MAX_HARDWARE_POWERLEVELS) {
DRM_ERROR("Invalid max uvd level!\n");
return -EINVAL;
}
return pi->max_uvd_level;
}
static int cz_dpm_unforce_uvd_dpm_levels(struct amdgpu_device *adev)
{
struct cz_power_info *pi = cz_get_pi(adev);
struct amdgpu_uvd_clock_voltage_dependency_table *dep_table =
&adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table;
uint32_t level = 0;
int ret = 0;
pi->uvd_dpm.soft_min_clk = dep_table->entries[0].vclk;
level = cz_dpm_get_max_uvd_level(adev) - 1;
if (level < dep_table->count)
pi->uvd_dpm.soft_max_clk = dep_table->entries[level].vclk;
else
pi->uvd_dpm.soft_max_clk =
dep_table->entries[dep_table->count - 1].vclk;
/* get min/max vclk soft value
* notify SMU to execute */
ret = cz_send_msg_to_smc_with_parameter(adev,
PPSMC_MSG_SetUvdSoftMin,
cz_get_uvd_level(adev,
pi->uvd_dpm.soft_min_clk,
PPSMC_MSG_SetUvdSoftMin));
if (ret)
return ret;
ret = cz_send_msg_to_smc_with_parameter(adev,
PPSMC_MSG_SetUvdSoftMax,
cz_get_uvd_level(adev,
pi->uvd_dpm.soft_max_clk,
PPSMC_MSG_SetUvdSoftMax));
if (ret)
return ret;
DRM_DEBUG("DPM uvd unforce state min=%d, max=%d.\n",
pi->uvd_dpm.soft_min_clk,
pi->uvd_dpm.soft_max_clk);
return 0;
}
static int cz_dpm_vce_force_highest(struct amdgpu_device *adev)
{
struct cz_power_info *pi = cz_get_pi(adev);
int ret = 0;
if (pi->vce_dpm.soft_min_clk != pi->vce_dpm.soft_max_clk) {
pi->vce_dpm.soft_min_clk =
pi->vce_dpm.soft_max_clk;
ret = cz_send_msg_to_smc_with_parameter(adev,
PPSMC_MSG_SetEclkSoftMin,
cz_get_eclk_level(adev,
pi->vce_dpm.soft_min_clk,
PPSMC_MSG_SetEclkSoftMin));
if (ret)
return ret;
}
return ret;
}
static int cz_dpm_vce_force_lowest(struct amdgpu_device *adev)
{
struct cz_power_info *pi = cz_get_pi(adev);
int ret = 0;
if (pi->vce_dpm.soft_max_clk != pi->vce_dpm.soft_min_clk) {
pi->vce_dpm.soft_max_clk = pi->vce_dpm.soft_min_clk;
ret = cz_send_msg_to_smc_with_parameter(adev,
PPSMC_MSG_SetEclkSoftMax,
cz_get_eclk_level(adev,
pi->vce_dpm.soft_max_clk,
PPSMC_MSG_SetEclkSoftMax));
if (ret)
return ret;
}
return ret;
}
static uint32_t cz_dpm_get_max_vce_level(struct amdgpu_device *adev)
{
struct cz_power_info *pi = cz_get_pi(adev);
if (!pi->max_vce_level) {
cz_send_msg_to_smc(adev, PPSMC_MSG_GetMaxEclkLevel);
pi->max_vce_level = cz_get_argument(adev) + 1;
}
if (pi->max_vce_level > CZ_MAX_HARDWARE_POWERLEVELS) {
DRM_ERROR("Invalid max vce level!\n");
return -EINVAL;
}
return pi->max_vce_level;
}
static int cz_dpm_unforce_vce_dpm_levels(struct amdgpu_device *adev)
{
struct cz_power_info *pi = cz_get_pi(adev);
struct amdgpu_vce_clock_voltage_dependency_table *dep_table =
&adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table;
uint32_t level = 0;
int ret = 0;
pi->vce_dpm.soft_min_clk = dep_table->entries[0].ecclk;
level = cz_dpm_get_max_vce_level(adev) - 1;
if (level < dep_table->count)
pi->vce_dpm.soft_max_clk = dep_table->entries[level].ecclk;
else
pi->vce_dpm.soft_max_clk =
dep_table->entries[dep_table->count - 1].ecclk;
/* get min/max eclk soft value
* notify SMU to execute */
ret = cz_send_msg_to_smc_with_parameter(adev,
PPSMC_MSG_SetEclkSoftMin,
cz_get_eclk_level(adev,
pi->vce_dpm.soft_min_clk,
PPSMC_MSG_SetEclkSoftMin));
if (ret)
return ret;
ret = cz_send_msg_to_smc_with_parameter(adev,
PPSMC_MSG_SetEclkSoftMax,
cz_get_eclk_level(adev,
pi->vce_dpm.soft_max_clk,
PPSMC_MSG_SetEclkSoftMax));
if (ret)
return ret;
DRM_DEBUG("DPM vce unforce state min=%d, max=%d.\n",
pi->vce_dpm.soft_min_clk,
pi->vce_dpm.soft_max_clk);
return 0;
}
static int cz_dpm_force_dpm_level(struct amdgpu_device *adev,
enum amdgpu_dpm_forced_level level)
{
@@ -1746,23 +1971,68 @@ static int cz_dpm_force_dpm_level(struct amdgpu_device *adev,
switch (level) {
case AMDGPU_DPM_FORCED_LEVEL_HIGH:
/* sclk */
ret = cz_dpm_unforce_dpm_levels(adev);
if (ret)
return ret;
ret = cz_dpm_force_highest(adev);
if (ret)
return ret;
/* uvd */
ret = cz_dpm_unforce_uvd_dpm_levels(adev);
if (ret)
return ret;
ret = cz_dpm_uvd_force_highest(adev);
if (ret)
return ret;
/* vce */
ret = cz_dpm_unforce_vce_dpm_levels(adev);
if (ret)
return ret;
ret = cz_dpm_vce_force_highest(adev);
if (ret)
return ret;
break;
case AMDGPU_DPM_FORCED_LEVEL_LOW:
/* sclk */
ret = cz_dpm_unforce_dpm_levels(adev);
if (ret)
return ret;
ret = cz_dpm_force_lowest(adev);
if (ret)
return ret;
/* uvd */
ret = cz_dpm_unforce_uvd_dpm_levels(adev);
if (ret)
return ret;
ret = cz_dpm_uvd_force_lowest(adev);
if (ret)
return ret;
/* vce */
ret = cz_dpm_unforce_vce_dpm_levels(adev);
if (ret)
return ret;
ret = cz_dpm_vce_force_lowest(adev);
if (ret)
return ret;
break;
case AMDGPU_DPM_FORCED_LEVEL_AUTO:
/* sclk */
ret = cz_dpm_unforce_dpm_levels(adev);
if (ret)
return ret;
/* uvd */
ret = cz_dpm_unforce_uvd_dpm_levels(adev);
if (ret)
return ret;
/* vce */
ret = cz_dpm_unforce_vce_dpm_levels(adev);
if (ret)
return ret;
break;
@@ -1905,7 +2175,8 @@ static int cz_update_vce_dpm(struct amdgpu_device *adev)
pi->vce_dpm.hard_min_clk = table->entries[table->count-1].ecclk;
} else { /* non-stable p-state cases. without vce.Arbiter.EcclkHardMin */
pi->vce_dpm.hard_min_clk = table->entries[0].ecclk;
/* leave it as set by user */
/*pi->vce_dpm.hard_min_clk = table->entries[0].ecclk;*/
}
cz_send_msg_to_smc_with_parameter(adev,


@@ -183,6 +183,8 @@ struct cz_power_info {
uint32_t voltage_drop_threshold;
uint32_t gfx_pg_threshold;
uint32_t max_sclk_level;
uint32_t max_uvd_level;
uint32_t max_vce_level;
/* flags */
bool didt_enabled;
bool video_start;


@@ -253,8 +253,14 @@ static void cz_ih_set_rptr(struct amdgpu_device *adev)
static int cz_ih_early_init(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
int ret;
ret = amdgpu_irq_add_domain(adev);
if (ret)
return ret;
cz_ih_set_interrupt_funcs(adev);
return 0;
}
@@ -278,6 +284,7 @@ static int cz_ih_sw_fini(void *handle)
amdgpu_irq_fini(adev);
amdgpu_ih_ring_fini(adev);
amdgpu_irq_remove_domain(adev);
return 0;
}


@@ -3729,7 +3729,7 @@ static void dce_v10_0_encoder_add(struct amdgpu_device *adev,
case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1:
case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2:
drm_encoder_init(dev, encoder, &dce_v10_0_encoder_funcs,
DRM_MODE_ENCODER_DAC);
DRM_MODE_ENCODER_DAC, NULL);
drm_encoder_helper_add(encoder, &dce_v10_0_dac_helper_funcs);
break;
case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1:
@@ -3740,15 +3740,15 @@ static void dce_v10_0_encoder_add(struct amdgpu_device *adev,
if (amdgpu_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) {
amdgpu_encoder->rmx_type = RMX_FULL;
drm_encoder_init(dev, encoder, &dce_v10_0_encoder_funcs,
DRM_MODE_ENCODER_LVDS);
DRM_MODE_ENCODER_LVDS, NULL);
amdgpu_encoder->enc_priv = amdgpu_atombios_encoder_get_lcd_info(amdgpu_encoder);
} else if (amdgpu_encoder->devices & (ATOM_DEVICE_CRT_SUPPORT)) {
drm_encoder_init(dev, encoder, &dce_v10_0_encoder_funcs,
DRM_MODE_ENCODER_DAC);
DRM_MODE_ENCODER_DAC, NULL);
amdgpu_encoder->enc_priv = amdgpu_atombios_encoder_get_dig_info(amdgpu_encoder);
} else {
drm_encoder_init(dev, encoder, &dce_v10_0_encoder_funcs,
DRM_MODE_ENCODER_TMDS);
DRM_MODE_ENCODER_TMDS, NULL);
amdgpu_encoder->enc_priv = amdgpu_atombios_encoder_get_dig_info(amdgpu_encoder);
}
drm_encoder_helper_add(encoder, &dce_v10_0_dig_helper_funcs);
@@ -3766,13 +3766,13 @@ static void dce_v10_0_encoder_add(struct amdgpu_device *adev,
amdgpu_encoder->is_ext_encoder = true;
if (amdgpu_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT))
drm_encoder_init(dev, encoder, &dce_v10_0_encoder_funcs,
DRM_MODE_ENCODER_LVDS);
DRM_MODE_ENCODER_LVDS, NULL);
else if (amdgpu_encoder->devices & (ATOM_DEVICE_CRT_SUPPORT))
drm_encoder_init(dev, encoder, &dce_v10_0_encoder_funcs,
DRM_MODE_ENCODER_DAC);
DRM_MODE_ENCODER_DAC, NULL);
else
drm_encoder_init(dev, encoder, &dce_v10_0_encoder_funcs,
DRM_MODE_ENCODER_TMDS);
DRM_MODE_ENCODER_TMDS, NULL);
drm_encoder_helper_add(encoder, &dce_v10_0_ext_helper_funcs);
break;
}


@@ -211,9 +211,9 @@ static bool dce_v11_0_is_counter_moving(struct amdgpu_device *adev, int crtc)
*/
static void dce_v11_0_vblank_wait(struct amdgpu_device *adev, int crtc)
{
unsigned i = 0;
unsigned i = 100;
if (crtc >= adev->mode_info.num_crtc)
if (crtc < 0 || crtc >= adev->mode_info.num_crtc)
return;
if (!(RREG32(mmCRTC_CONTROL + crtc_offsets[crtc]) & CRTC_CONTROL__CRTC_MASTER_EN_MASK))
@@ -223,14 +223,16 @@ static void dce_v11_0_vblank_wait(struct amdgpu_device *adev, int crtc)
* wait for another frame.
*/
while (dce_v11_0_is_in_vblank(adev, crtc)) {
if (i++ % 100 == 0) {
if (i++ == 100) {
i = 0;
if (!dce_v11_0_is_counter_moving(adev, crtc))
break;
}
}
while (!dce_v11_0_is_in_vblank(adev, crtc)) {
if (i++ % 100 == 0) {
if (i++ == 100) {
i = 0;
if (!dce_v11_0_is_counter_moving(adev, crtc))
break;
}
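The rewritten wait loops prime `i` to 100 so the "is the frame counter still moving?" check fires on the very first pass and then once per 100 iterations; a stuck CRTC can no longer stall the wait for 100 polls before the first check. A sketch of that polling pattern with the register reads replaced by injected callbacks (all names here are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Poll while cond() holds, checking moving() on the first iteration
 * and then every 100 iterations; bail out if the counter is stuck.
 * Returns the number of polls performed. */
static int wait_while(bool (*cond)(void *), bool (*moving)(void *), void *ctx)
{
	unsigned i = 100;	/* primed: check on first pass */
	int polls = 0;

	while (cond(ctx)) {
		polls++;
		if (i++ == 100) {
			i = 0;
			if (!moving(ctx))
				break;	/* counter stuck: give up */
		}
	}
	return polls;
}

static bool always_true(void *ctx) { (void)ctx; return true; }
static bool never_moving(void *ctx) { (void)ctx; return false; }
```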
@@ -239,7 +241,7 @@ static void dce_v11_0_vblank_wait(struct amdgpu_device *adev, int crtc)
static u32 dce_v11_0_vblank_get_counter(struct amdgpu_device *adev, int crtc)
{
if (crtc >= adev->mode_info.num_crtc)
if (crtc < 0 || crtc >= adev->mode_info.num_crtc)
return 0;
else
return RREG32(mmCRTC_STATUS_FRAME_COUNT + crtc_offsets[crtc]);
@@ -3384,7 +3386,7 @@ static void dce_v11_0_crtc_vblank_int_ack(struct amdgpu_device *adev,
{
u32 tmp;
if (crtc >= adev->mode_info.num_crtc) {
if (crtc < 0 || crtc >= adev->mode_info.num_crtc) {
DRM_DEBUG("invalid crtc %d\n", crtc);
return;
}
@@ -3399,7 +3401,7 @@ static void dce_v11_0_crtc_vline_int_ack(struct amdgpu_device *adev,
{
u32 tmp;
if (crtc >= adev->mode_info.num_crtc) {
if (crtc < 0 || crtc >= adev->mode_info.num_crtc) {
DRM_DEBUG("invalid crtc %d\n", crtc);
return;
}
@@ -3722,7 +3724,7 @@ static void dce_v11_0_encoder_add(struct amdgpu_device *adev,
case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1:
case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2:
drm_encoder_init(dev, encoder, &dce_v11_0_encoder_funcs,
DRM_MODE_ENCODER_DAC);
DRM_MODE_ENCODER_DAC, NULL);
drm_encoder_helper_add(encoder, &dce_v11_0_dac_helper_funcs);
break;
case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1:
@@ -3733,15 +3735,15 @@ static void dce_v11_0_encoder_add(struct amdgpu_device *adev,
if (amdgpu_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) {
amdgpu_encoder->rmx_type = RMX_FULL;
drm_encoder_init(dev, encoder, &dce_v11_0_encoder_funcs,
DRM_MODE_ENCODER_LVDS);
DRM_MODE_ENCODER_LVDS, NULL);
amdgpu_encoder->enc_priv = amdgpu_atombios_encoder_get_lcd_info(amdgpu_encoder);
} else if (amdgpu_encoder->devices & (ATOM_DEVICE_CRT_SUPPORT)) {
drm_encoder_init(dev, encoder, &dce_v11_0_encoder_funcs,
DRM_MODE_ENCODER_DAC);
DRM_MODE_ENCODER_DAC, NULL);
amdgpu_encoder->enc_priv = amdgpu_atombios_encoder_get_dig_info(amdgpu_encoder);
} else {
drm_encoder_init(dev, encoder, &dce_v11_0_encoder_funcs,
DRM_MODE_ENCODER_TMDS);
DRM_MODE_ENCODER_TMDS, NULL);
amdgpu_encoder->enc_priv = amdgpu_atombios_encoder_get_dig_info(amdgpu_encoder);
}
drm_encoder_helper_add(encoder, &dce_v11_0_dig_helper_funcs);
@@ -3759,13 +3761,13 @@ static void dce_v11_0_encoder_add(struct amdgpu_device *adev,
amdgpu_encoder->is_ext_encoder = true;
if (amdgpu_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT))
drm_encoder_init(dev, encoder, &dce_v11_0_encoder_funcs,
DRM_MODE_ENCODER_LVDS);
DRM_MODE_ENCODER_LVDS, NULL);
else if (amdgpu_encoder->devices & (ATOM_DEVICE_CRT_SUPPORT))
drm_encoder_init(dev, encoder, &dce_v11_0_encoder_funcs,
DRM_MODE_ENCODER_DAC);
DRM_MODE_ENCODER_DAC, NULL);
else
drm_encoder_init(dev, encoder, &dce_v11_0_encoder_funcs,
DRM_MODE_ENCODER_TMDS);
DRM_MODE_ENCODER_TMDS, NULL);
drm_encoder_helper_add(encoder, &dce_v11_0_ext_helper_funcs);
break;
}


@@ -3659,7 +3659,7 @@ static void dce_v8_0_encoder_add(struct amdgpu_device *adev,
case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1:
case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2:
drm_encoder_init(dev, encoder, &dce_v8_0_encoder_funcs,
DRM_MODE_ENCODER_DAC);
DRM_MODE_ENCODER_DAC, NULL);
drm_encoder_helper_add(encoder, &dce_v8_0_dac_helper_funcs);
break;
case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1:
@@ -3670,15 +3670,15 @@ static void dce_v8_0_encoder_add(struct amdgpu_device *adev,
if (amdgpu_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) {
amdgpu_encoder->rmx_type = RMX_FULL;
drm_encoder_init(dev, encoder, &dce_v8_0_encoder_funcs,
DRM_MODE_ENCODER_LVDS);
DRM_MODE_ENCODER_LVDS, NULL);
amdgpu_encoder->enc_priv = amdgpu_atombios_encoder_get_lcd_info(amdgpu_encoder);
} else if (amdgpu_encoder->devices & (ATOM_DEVICE_CRT_SUPPORT)) {
drm_encoder_init(dev, encoder, &dce_v8_0_encoder_funcs,
DRM_MODE_ENCODER_DAC);
DRM_MODE_ENCODER_DAC, NULL);
amdgpu_encoder->enc_priv = amdgpu_atombios_encoder_get_dig_info(amdgpu_encoder);
} else {
drm_encoder_init(dev, encoder, &dce_v8_0_encoder_funcs,
DRM_MODE_ENCODER_TMDS);
DRM_MODE_ENCODER_TMDS, NULL);
amdgpu_encoder->enc_priv = amdgpu_atombios_encoder_get_dig_info(amdgpu_encoder);
}
drm_encoder_helper_add(encoder, &dce_v8_0_dig_helper_funcs);
@@ -3696,13 +3696,13 @@ static void dce_v8_0_encoder_add(struct amdgpu_device *adev,
amdgpu_encoder->is_ext_encoder = true;
if (amdgpu_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT))
drm_encoder_init(dev, encoder, &dce_v8_0_encoder_funcs,
DRM_MODE_ENCODER_LVDS);
DRM_MODE_ENCODER_LVDS, NULL);
else if (amdgpu_encoder->devices & (ATOM_DEVICE_CRT_SUPPORT))
drm_encoder_init(dev, encoder, &dce_v8_0_encoder_funcs,
DRM_MODE_ENCODER_DAC);
DRM_MODE_ENCODER_DAC, NULL);
else
drm_encoder_init(dev, encoder, &dce_v8_0_encoder_funcs,
DRM_MODE_ENCODER_TMDS);
DRM_MODE_ENCODER_TMDS, NULL);
drm_encoder_helper_add(encoder, &dce_v8_0_ext_helper_funcs);
break;
}


@@ -24,7 +24,7 @@
#include <linux/firmware.h>
#include "drmP.h"
#include "amdgpu.h"
#include "fiji_smumgr.h"
#include "fiji_smum.h"
MODULE_FIRMWARE("amdgpu/fiji_smc.bin");


@@ -1,182 +0,0 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef FIJI_PP_SMC_H
#define FIJI_PP_SMC_H
#pragma pack(push, 1)
#define PPSMC_SWSTATE_FLAG_DC 0x01
#define PPSMC_SWSTATE_FLAG_UVD 0x02
#define PPSMC_SWSTATE_FLAG_VCE 0x04
#define PPSMC_THERMAL_PROTECT_TYPE_INTERNAL 0x00
#define PPSMC_THERMAL_PROTECT_TYPE_EXTERNAL 0x01
#define PPSMC_THERMAL_PROTECT_TYPE_NONE 0xff
#define PPSMC_SYSTEMFLAG_GPIO_DC 0x01
#define PPSMC_SYSTEMFLAG_STEPVDDC 0x02
#define PPSMC_SYSTEMFLAG_GDDR5 0x04
#define PPSMC_SYSTEMFLAG_DISABLE_BABYSTEP 0x08
#define PPSMC_SYSTEMFLAG_REGULATOR_HOT 0x10
#define PPSMC_SYSTEMFLAG_REGULATOR_HOT_ANALOG 0x20
#define PPSMC_EXTRAFLAGS_AC2DC_ACTION_MASK 0x07
#define PPSMC_EXTRAFLAGS_AC2DC_DONT_WAIT_FOR_VBLANK 0x08
#define PPSMC_EXTRAFLAGS_AC2DC_ACTION_GOTODPMLOWSTATE 0x00
#define PPSMC_EXTRAFLAGS_AC2DC_ACTION_GOTOINITIALSTATE 0x01
#define PPSMC_DPM2FLAGS_TDPCLMP 0x01
#define PPSMC_DPM2FLAGS_PWRSHFT 0x02
#define PPSMC_DPM2FLAGS_OCP 0x04
#define PPSMC_DISPLAY_WATERMARK_LOW 0
#define PPSMC_DISPLAY_WATERMARK_HIGH 1
#define PPSMC_STATEFLAG_AUTO_PULSE_SKIP 0x01
#define PPSMC_STATEFLAG_POWERBOOST 0x02
#define PPSMC_STATEFLAG_PSKIP_ON_TDP_FAULT 0x04
#define PPSMC_STATEFLAG_POWERSHIFT 0x08
#define PPSMC_STATEFLAG_SLOW_READ_MARGIN 0x10
#define PPSMC_STATEFLAG_DEEPSLEEP_THROTTLE 0x20
#define PPSMC_STATEFLAG_DEEPSLEEP_BYPASS 0x40
#define FDO_MODE_HARDWARE 0
#define FDO_MODE_PIECE_WISE_LINEAR 1
enum FAN_CONTROL {
FAN_CONTROL_FUZZY,
FAN_CONTROL_TABLE
};
//Gemini Modes
#define PPSMC_GeminiModeNone 0 //Single GPU board
#define PPSMC_GeminiModeMaster 1 //Master GPU on a Gemini board
#define PPSMC_GeminiModeSlave 2 //Slave GPU on a Gemini board
#define PPSMC_Result_OK ((uint16_t)0x01)
#define PPSMC_Result_NoMore ((uint16_t)0x02)
#define PPSMC_Result_NotNow ((uint16_t)0x03)
#define PPSMC_Result_Failed ((uint16_t)0xFF)
#define PPSMC_Result_UnknownCmd ((uint16_t)0xFE)
#define PPSMC_Result_UnknownVT ((uint16_t)0xFD)
typedef uint16_t PPSMC_Result;
#define PPSMC_isERROR(x) ((uint16_t)0x80 & (x))
#define PPSMC_MSG_Halt ((uint16_t)0x10)
#define PPSMC_MSG_Resume ((uint16_t)0x11)
#define PPSMC_MSG_EnableDPMLevel ((uint16_t)0x12)
#define PPSMC_MSG_ZeroLevelsDisabled ((uint16_t)0x13)
#define PPSMC_MSG_OneLevelsDisabled ((uint16_t)0x14)
#define PPSMC_MSG_TwoLevelsDisabled ((uint16_t)0x15)
#define PPSMC_MSG_EnableThermalInterrupt ((uint16_t)0x16)
#define PPSMC_MSG_RunningOnAC ((uint16_t)0x17)
#define PPSMC_MSG_LevelUp ((uint16_t)0x18)
#define PPSMC_MSG_LevelDown ((uint16_t)0x19)
#define PPSMC_MSG_ResetDPMCounters ((uint16_t)0x1a)
#define PPSMC_MSG_SwitchToSwState ((uint16_t)0x20)
#define PPSMC_MSG_SwitchToSwStateLast ((uint16_t)0x3f)
#define PPSMC_MSG_SwitchToInitialState ((uint16_t)0x40)
#define PPSMC_MSG_NoForcedLevel ((uint16_t)0x41)
#define PPSMC_MSG_ForceHigh ((uint16_t)0x42)
#define PPSMC_MSG_ForceMediumOrHigh ((uint16_t)0x43)
#define PPSMC_MSG_SwitchToMinimumPower ((uint16_t)0x51)
#define PPSMC_MSG_ResumeFromMinimumPower ((uint16_t)0x52)
#define PPSMC_MSG_EnableCac ((uint16_t)0x53)
#define PPSMC_MSG_DisableCac ((uint16_t)0x54)
#define PPSMC_DPMStateHistoryStart ((uint16_t)0x55)
#define PPSMC_DPMStateHistoryStop ((uint16_t)0x56)
#define PPSMC_CACHistoryStart ((uint16_t)0x57)
#define PPSMC_CACHistoryStop ((uint16_t)0x58)
#define PPSMC_TDPClampingActive ((uint16_t)0x59)
#define PPSMC_TDPClampingInactive ((uint16_t)0x5A)
#define PPSMC_StartFanControl ((uint16_t)0x5B)
#define PPSMC_StopFanControl ((uint16_t)0x5C)
#define PPSMC_NoDisplay ((uint16_t)0x5D)
#define PPSMC_HasDisplay ((uint16_t)0x5E)
#define PPSMC_MSG_UVDPowerOFF ((uint16_t)0x60)
#define PPSMC_MSG_UVDPowerON ((uint16_t)0x61)
#define PPSMC_MSG_EnableULV ((uint16_t)0x62)
#define PPSMC_MSG_DisableULV ((uint16_t)0x63)
#define PPSMC_MSG_EnterULV ((uint16_t)0x64)
#define PPSMC_MSG_ExitULV ((uint16_t)0x65)
#define PPSMC_PowerShiftActive ((uint16_t)0x6A)
#define PPSMC_PowerShiftInactive ((uint16_t)0x6B)
#define PPSMC_OCPActive ((uint16_t)0x6C)
#define PPSMC_OCPInactive ((uint16_t)0x6D)
#define PPSMC_CACLongTermAvgEnable ((uint16_t)0x6E)
#define PPSMC_CACLongTermAvgDisable ((uint16_t)0x6F)
#define PPSMC_MSG_InferredStateSweep_Start ((uint16_t)0x70)
#define PPSMC_MSG_InferredStateSweep_Stop ((uint16_t)0x71)
#define PPSMC_MSG_SwitchToLowestInfState ((uint16_t)0x72)
#define PPSMC_MSG_SwitchToNonInfState ((uint16_t)0x73)
#define PPSMC_MSG_AllStateSweep_Start ((uint16_t)0x74)
#define PPSMC_MSG_AllStateSweep_Stop ((uint16_t)0x75)
#define PPSMC_MSG_SwitchNextLowerInfState ((uint16_t)0x76)
#define PPSMC_MSG_SwitchNextHigherInfState ((uint16_t)0x77)
#define PPSMC_MSG_MclkRetrainingTest ((uint16_t)0x78)
#define PPSMC_MSG_ForceTDPClamping ((uint16_t)0x79)
#define PPSMC_MSG_CollectCAC_PowerCorreln ((uint16_t)0x7A)
#define PPSMC_MSG_CollectCAC_WeightCalib ((uint16_t)0x7B)
#define PPSMC_MSG_CollectCAC_SQonly ((uint16_t)0x7C)
#define PPSMC_MSG_CollectCAC_TemperaturePwr ((uint16_t)0x7D)
#define PPSMC_MSG_ExtremitiesTest_Start ((uint16_t)0x7E)
#define PPSMC_MSG_ExtremitiesTest_Stop ((uint16_t)0x7F)
#define PPSMC_FlushDataCache ((uint16_t)0x80)
#define PPSMC_FlushInstrCache ((uint16_t)0x81)
#define PPSMC_MSG_SetEnabledLevels ((uint16_t)0x82)
#define PPSMC_MSG_SetForcedLevels ((uint16_t)0x83)
#define PPSMC_MSG_ResetToDefaults ((uint16_t)0x84)
#define PPSMC_MSG_SetForcedLevelsAndJump ((uint16_t)0x85)
#define PPSMC_MSG_SetCACHistoryMode ((uint16_t)0x86)
#define PPSMC_MSG_EnableDTE ((uint16_t)0x87)
#define PPSMC_MSG_DisableDTE ((uint16_t)0x88)
#define PPSMC_MSG_SmcSpaceSetAddress ((uint16_t)0x89)
#define PPSMC_MSG_SmcSpaceWriteDWordInc ((uint16_t)0x8A)
#define PPSMC_MSG_SmcSpaceWriteWordInc ((uint16_t)0x8B)
#define PPSMC_MSG_SmcSpaceWriteByteInc ((uint16_t)0x8C)
#define PPSMC_MSG_BREAK ((uint16_t)0xF8)
#define PPSMC_MSG_Test ((uint16_t)0x100)
#define PPSMC_MSG_DRV_DRAM_ADDR_HI ((uint16_t)0x250)
#define PPSMC_MSG_DRV_DRAM_ADDR_LO ((uint16_t)0x251)
#define PPSMC_MSG_SMU_DRAM_ADDR_HI ((uint16_t)0x252)
#define PPSMC_MSG_SMU_DRAM_ADDR_LO ((uint16_t)0x253)
#define PPSMC_MSG_LoadUcodes ((uint16_t)0x254)
typedef uint16_t PPSMC_Msg;
#define PPSMC_EVENT_STATUS_THERMAL 0x00000001
#define PPSMC_EVENT_STATUS_REGULATORHOT 0x00000002
#define PPSMC_EVENT_STATUS_DC 0x00000004
#define PPSMC_EVENT_STATUS_GPIO17 0x00000008
#pragma pack(pop)
#endif


@@ -25,7 +25,7 @@
#include "drmP.h"
#include "amdgpu.h"
#include "fiji_ppsmc.h"
#include "fiji_smumgr.h"
#include "fiji_smum.h"
#include "smu_ucode_xfer_vi.h"
#include "amdgpu_ucode.h"

(file diff suppressed because it is too large)


@@ -370,6 +370,10 @@ static int gmc_v7_0_mc_init(struct amdgpu_device *adev)
adev->mc.real_vram_size = RREG32(mmCONFIG_MEMSIZE) * 1024ULL * 1024ULL;
adev->mc.visible_vram_size = adev->mc.aper_size;
/* In case the PCI BAR is larger than the actual amount of vram */
if (adev->mc.visible_vram_size > adev->mc.real_vram_size)
adev->mc.visible_vram_size = adev->mc.real_vram_size;
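The new clamp guards against boards where the PCI aperture (BAR) is larger than the VRAM actually present, so the driver never treats non-existent memory as CPU-visible. A trivial sketch of the rule:

```c
#include <assert.h>
#include <stdint.h>

/* CPU-visible VRAM is the aperture size, limited to the VRAM that
 * really exists. */
static uint64_t visible_vram_size(uint64_t aper_size, uint64_t real_vram)
{
	return aper_size > real_vram ? real_vram : aper_size;
}
```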
/* unless the user has overridden it, set the gart
* size equal to 1024 MB or the vram size, whichever is larger.
*/
@@ -1012,7 +1016,6 @@ static int gmc_v7_0_suspend(void *handle)
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
if (adev->vm_manager.enabled) {
amdgpu_vm_manager_fini(adev);
gmc_v7_0_vm_fini(adev);
adev->vm_manager.enabled = false;
}


@@ -476,6 +476,10 @@ static int gmc_v8_0_mc_init(struct amdgpu_device *adev)
adev->mc.real_vram_size = RREG32(mmCONFIG_MEMSIZE) * 1024ULL * 1024ULL;
adev->mc.visible_vram_size = adev->mc.aper_size;
/* In case the PCI BAR is larger than the actual amount of vram */
if (adev->mc.visible_vram_size > adev->mc.real_vram_size)
adev->mc.visible_vram_size = adev->mc.real_vram_size;
/* unless the user has overridden it, set the gart
* size equal to 1024 MB or the vram size, whichever is larger.
*/
@@ -1033,7 +1037,6 @@ static int gmc_v8_0_suspend(void *handle)
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
if (adev->vm_manager.enabled) {
amdgpu_vm_manager_fini(adev);
gmc_v8_0_vm_fini(adev);
adev->vm_manager.enabled = false;
}
@@ -1324,9 +1327,181 @@ static int gmc_v8_0_process_interrupt(struct amdgpu_device *adev,
return 0;
}
static void fiji_update_mc_medium_grain_clock_gating(struct amdgpu_device *adev,
bool enable)
{
uint32_t data;
if (enable) {
data = RREG32(mmMC_HUB_MISC_HUB_CG);
data |= MC_HUB_MISC_HUB_CG__ENABLE_MASK;
WREG32(mmMC_HUB_MISC_HUB_CG, data);
data = RREG32(mmMC_HUB_MISC_SIP_CG);
data |= MC_HUB_MISC_SIP_CG__ENABLE_MASK;
WREG32(mmMC_HUB_MISC_SIP_CG, data);
data = RREG32(mmMC_HUB_MISC_VM_CG);
data |= MC_HUB_MISC_VM_CG__ENABLE_MASK;
WREG32(mmMC_HUB_MISC_VM_CG, data);
data = RREG32(mmMC_XPB_CLK_GAT);
data |= MC_XPB_CLK_GAT__ENABLE_MASK;
WREG32(mmMC_XPB_CLK_GAT, data);
data = RREG32(mmATC_MISC_CG);
data |= ATC_MISC_CG__ENABLE_MASK;
WREG32(mmATC_MISC_CG, data);
data = RREG32(mmMC_CITF_MISC_WR_CG);
data |= MC_CITF_MISC_WR_CG__ENABLE_MASK;
WREG32(mmMC_CITF_MISC_WR_CG, data);
data = RREG32(mmMC_CITF_MISC_RD_CG);
data |= MC_CITF_MISC_RD_CG__ENABLE_MASK;
WREG32(mmMC_CITF_MISC_RD_CG, data);
data = RREG32(mmMC_CITF_MISC_VM_CG);
data |= MC_CITF_MISC_VM_CG__ENABLE_MASK;
WREG32(mmMC_CITF_MISC_VM_CG, data);
data = RREG32(mmVM_L2_CG);
data |= VM_L2_CG__ENABLE_MASK;
WREG32(mmVM_L2_CG, data);
} else {
data = RREG32(mmMC_HUB_MISC_HUB_CG);
data &= ~MC_HUB_MISC_HUB_CG__ENABLE_MASK;
WREG32(mmMC_HUB_MISC_HUB_CG, data);
data = RREG32(mmMC_HUB_MISC_SIP_CG);
data &= ~MC_HUB_MISC_SIP_CG__ENABLE_MASK;
WREG32(mmMC_HUB_MISC_SIP_CG, data);
data = RREG32(mmMC_HUB_MISC_VM_CG);
data &= ~MC_HUB_MISC_VM_CG__ENABLE_MASK;
WREG32(mmMC_HUB_MISC_VM_CG, data);
data = RREG32(mmMC_XPB_CLK_GAT);
data &= ~MC_XPB_CLK_GAT__ENABLE_MASK;
WREG32(mmMC_XPB_CLK_GAT, data);
data = RREG32(mmATC_MISC_CG);
data &= ~ATC_MISC_CG__ENABLE_MASK;
WREG32(mmATC_MISC_CG, data);
data = RREG32(mmMC_CITF_MISC_WR_CG);
data &= ~MC_CITF_MISC_WR_CG__ENABLE_MASK;
WREG32(mmMC_CITF_MISC_WR_CG, data);
data = RREG32(mmMC_CITF_MISC_RD_CG);
data &= ~MC_CITF_MISC_RD_CG__ENABLE_MASK;
WREG32(mmMC_CITF_MISC_RD_CG, data);
data = RREG32(mmMC_CITF_MISC_VM_CG);
data &= ~MC_CITF_MISC_VM_CG__ENABLE_MASK;
WREG32(mmMC_CITF_MISC_VM_CG, data);
data = RREG32(mmVM_L2_CG);
data &= ~VM_L2_CG__ENABLE_MASK;
WREG32(mmVM_L2_CG, data);
}
}
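Every register touched above gets the same read-modify-write treatment: read, set or clear one enable mask, write back. A small helper expressing the pattern once (rmw() stands in for the paired RREG32()/WREG32() sequence, which it does not perform itself):

```c
#include <assert.h>
#include <stdint.h>

/* Set the mask bits when enabling, clear them when disabling. */
static uint32_t rmw(uint32_t reg, uint32_t mask, int enable)
{
	return enable ? (reg | mask) : (reg & ~mask);
}
```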
static void fiji_update_mc_light_sleep(struct amdgpu_device *adev,
bool enable)
{
uint32_t data;
if (enable) {
data = RREG32(mmMC_HUB_MISC_HUB_CG);
data |= MC_HUB_MISC_HUB_CG__MEM_LS_ENABLE_MASK;
WREG32(mmMC_HUB_MISC_HUB_CG, data);
data = RREG32(mmMC_HUB_MISC_SIP_CG);
data |= MC_HUB_MISC_SIP_CG__MEM_LS_ENABLE_MASK;
WREG32(mmMC_HUB_MISC_SIP_CG, data);
data = RREG32(mmMC_HUB_MISC_VM_CG);
data |= MC_HUB_MISC_VM_CG__MEM_LS_ENABLE_MASK;
WREG32(mmMC_HUB_MISC_VM_CG, data);
data = RREG32(mmMC_XPB_CLK_GAT);
data |= MC_XPB_CLK_GAT__MEM_LS_ENABLE_MASK;
WREG32(mmMC_XPB_CLK_GAT, data);
data = RREG32(mmATC_MISC_CG);
data |= ATC_MISC_CG__MEM_LS_ENABLE_MASK;
WREG32(mmATC_MISC_CG, data);
data = RREG32(mmMC_CITF_MISC_WR_CG);
data |= MC_CITF_MISC_WR_CG__MEM_LS_ENABLE_MASK;
WREG32(mmMC_CITF_MISC_WR_CG, data);
data = RREG32(mmMC_CITF_MISC_RD_CG);
data |= MC_CITF_MISC_RD_CG__MEM_LS_ENABLE_MASK;
WREG32(mmMC_CITF_MISC_RD_CG, data);
data = RREG32(mmMC_CITF_MISC_VM_CG);
data |= MC_CITF_MISC_VM_CG__MEM_LS_ENABLE_MASK;
WREG32(mmMC_CITF_MISC_VM_CG, data);
data = RREG32(mmVM_L2_CG);
data |= VM_L2_CG__MEM_LS_ENABLE_MASK;
WREG32(mmVM_L2_CG, data);
} else {
data = RREG32(mmMC_HUB_MISC_HUB_CG);
data &= ~MC_HUB_MISC_HUB_CG__MEM_LS_ENABLE_MASK;
WREG32(mmMC_HUB_MISC_HUB_CG, data);
data = RREG32(mmMC_HUB_MISC_SIP_CG);
data &= ~MC_HUB_MISC_SIP_CG__MEM_LS_ENABLE_MASK;
WREG32(mmMC_HUB_MISC_SIP_CG, data);
data = RREG32(mmMC_HUB_MISC_VM_CG);
data &= ~MC_HUB_MISC_VM_CG__MEM_LS_ENABLE_MASK;
WREG32(mmMC_HUB_MISC_VM_CG, data);
data = RREG32(mmMC_XPB_CLK_GAT);
data &= ~MC_XPB_CLK_GAT__MEM_LS_ENABLE_MASK;
WREG32(mmMC_XPB_CLK_GAT, data);
data = RREG32(mmATC_MISC_CG);
data &= ~ATC_MISC_CG__MEM_LS_ENABLE_MASK;
WREG32(mmATC_MISC_CG, data);
data = RREG32(mmMC_CITF_MISC_WR_CG);
data &= ~MC_CITF_MISC_WR_CG__MEM_LS_ENABLE_MASK;
WREG32(mmMC_CITF_MISC_WR_CG, data);
data = RREG32(mmMC_CITF_MISC_RD_CG);
data &= ~MC_CITF_MISC_RD_CG__MEM_LS_ENABLE_MASK;
WREG32(mmMC_CITF_MISC_RD_CG, data);
data = RREG32(mmMC_CITF_MISC_VM_CG);
data &= ~MC_CITF_MISC_VM_CG__MEM_LS_ENABLE_MASK;
WREG32(mmMC_CITF_MISC_VM_CG, data);
data = RREG32(mmVM_L2_CG);
data &= ~VM_L2_CG__MEM_LS_ENABLE_MASK;
WREG32(mmVM_L2_CG, data);
}
}
static int gmc_v8_0_set_clockgating_state(void *handle,
enum amd_clockgating_state state)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
switch (adev->asic_type) {
case CHIP_FIJI:
fiji_update_mc_medium_grain_clock_gating(adev,
state == AMD_CG_STATE_GATE ? true : false);
fiji_update_mc_light_sleep(adev,
state == AMD_CG_STATE_GATE ? true : false);
break;
default:
break;
}
return 0;
}
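An aside for readers skimming the hunk above: every one of these gating helpers is the same read-modify-write idiom, repeated per register (read the register, set or clear one ENABLE bit, write it back). A minimal sketch of that idiom, using a plain array as a stand-in for the MMIO space; all `sim_*` names here are hypothetical, not driver API:

```c
#include <stdint.h>

#define SIM_NUM_REGS    8
#define SIM_ENABLE_MASK (1u << 0)   /* stand-in for a *_CG__ENABLE_MASK bit */

uint32_t sim_regs[SIM_NUM_REGS];

/* array-backed stand-ins for RREG32()/WREG32() */
uint32_t sim_rreg32(unsigned reg) { return sim_regs[reg]; }
void sim_wreg32(unsigned reg, uint32_t val) { sim_regs[reg] = val; }

/* mirrors the shape of fiji_update_mc_*_clock_gating(): one RMW per register */
void sim_update_clock_gating(int enable)
{
	uint32_t data;
	unsigned reg;

	for (reg = 0; reg < SIM_NUM_REGS; reg++) {
		data = sim_rreg32(reg);
		if (enable)
			data |= SIM_ENABLE_MASK;
		else
			data &= ~SIM_ENABLE_MASK;
		sim_wreg32(reg, data);
	}
}
```

Because only the named bit is touched, whatever other bits the firmware left set in each register survive the toggle.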


@@ -253,8 +253,14 @@ static void iceland_ih_set_rptr(struct amdgpu_device *adev)
static int iceland_ih_early_init(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
int ret;
ret = amdgpu_irq_add_domain(adev);
if (ret)
return ret;
iceland_ih_set_interrupt_funcs(adev);
return 0;
}
@@ -278,6 +284,7 @@ static int iceland_ih_sw_fini(void *handle)
amdgpu_irq_fini(adev);
amdgpu_ih_ring_fini(adev);
amdgpu_irq_remove_domain(adev);
return 0;
}


@@ -727,18 +727,20 @@ static int sdma_v3_0_start(struct amdgpu_device *adev)
{
int r, i;
if (!adev->firmware.smu_load) {
r = sdma_v3_0_load_microcode(adev);
if (r)
return r;
} else {
for (i = 0; i < adev->sdma.num_instances; i++) {
r = adev->smu.smumgr_funcs->check_fw_load_finish(adev,
(i == 0) ?
AMDGPU_UCODE_ID_SDMA0 :
AMDGPU_UCODE_ID_SDMA1);
if (!adev->pp_enabled) {
if (!adev->firmware.smu_load) {
r = sdma_v3_0_load_microcode(adev);
if (r)
return -EINVAL;
return r;
} else {
for (i = 0; i < adev->sdma.num_instances; i++) {
r = adev->smu.smumgr_funcs->check_fw_load_finish(adev,
(i == 0) ?
AMDGPU_UCODE_ID_SDMA0 :
AMDGPU_UCODE_ID_SDMA1);
if (r)
return -EINVAL;
}
}
}
@@ -1427,9 +1429,114 @@ static int sdma_v3_0_process_illegal_inst_irq(struct amdgpu_device *adev,
return 0;
}
static void fiji_update_sdma_medium_grain_clock_gating(
struct amdgpu_device *adev,
bool enable)
{
uint32_t temp, data;
if (enable) {
temp = data = RREG32(mmSDMA0_CLK_CTRL);
data &= ~(SDMA0_CLK_CTRL__SOFT_OVERRIDE7_MASK |
SDMA0_CLK_CTRL__SOFT_OVERRIDE6_MASK |
SDMA0_CLK_CTRL__SOFT_OVERRIDE5_MASK |
SDMA0_CLK_CTRL__SOFT_OVERRIDE4_MASK |
SDMA0_CLK_CTRL__SOFT_OVERRIDE3_MASK |
SDMA0_CLK_CTRL__SOFT_OVERRIDE2_MASK |
SDMA0_CLK_CTRL__SOFT_OVERRIDE1_MASK |
SDMA0_CLK_CTRL__SOFT_OVERRIDE0_MASK);
if (data != temp)
WREG32(mmSDMA0_CLK_CTRL, data);
temp = data = RREG32(mmSDMA1_CLK_CTRL);
data &= ~(SDMA1_CLK_CTRL__SOFT_OVERRIDE7_MASK |
SDMA1_CLK_CTRL__SOFT_OVERRIDE6_MASK |
SDMA1_CLK_CTRL__SOFT_OVERRIDE5_MASK |
SDMA1_CLK_CTRL__SOFT_OVERRIDE4_MASK |
SDMA1_CLK_CTRL__SOFT_OVERRIDE3_MASK |
SDMA1_CLK_CTRL__SOFT_OVERRIDE2_MASK |
SDMA1_CLK_CTRL__SOFT_OVERRIDE1_MASK |
SDMA1_CLK_CTRL__SOFT_OVERRIDE0_MASK);
if (data != temp)
WREG32(mmSDMA1_CLK_CTRL, data);
} else {
temp = data = RREG32(mmSDMA0_CLK_CTRL);
data |= SDMA0_CLK_CTRL__SOFT_OVERRIDE7_MASK |
SDMA0_CLK_CTRL__SOFT_OVERRIDE6_MASK |
SDMA0_CLK_CTRL__SOFT_OVERRIDE5_MASK |
SDMA0_CLK_CTRL__SOFT_OVERRIDE4_MASK |
SDMA0_CLK_CTRL__SOFT_OVERRIDE3_MASK |
SDMA0_CLK_CTRL__SOFT_OVERRIDE2_MASK |
SDMA0_CLK_CTRL__SOFT_OVERRIDE1_MASK |
SDMA0_CLK_CTRL__SOFT_OVERRIDE0_MASK;
if (data != temp)
WREG32(mmSDMA0_CLK_CTRL, data);
temp = data = RREG32(mmSDMA1_CLK_CTRL);
data |= SDMA1_CLK_CTRL__SOFT_OVERRIDE7_MASK |
SDMA1_CLK_CTRL__SOFT_OVERRIDE6_MASK |
SDMA1_CLK_CTRL__SOFT_OVERRIDE5_MASK |
SDMA1_CLK_CTRL__SOFT_OVERRIDE4_MASK |
SDMA1_CLK_CTRL__SOFT_OVERRIDE3_MASK |
SDMA1_CLK_CTRL__SOFT_OVERRIDE2_MASK |
SDMA1_CLK_CTRL__SOFT_OVERRIDE1_MASK |
SDMA1_CLK_CTRL__SOFT_OVERRIDE0_MASK;
if (data != temp)
WREG32(mmSDMA1_CLK_CTRL, data);
}
}
static void fiji_update_sdma_medium_grain_light_sleep(
struct amdgpu_device *adev,
bool enable)
{
uint32_t temp, data;
if (enable) {
temp = data = RREG32(mmSDMA0_POWER_CNTL);
data |= SDMA0_POWER_CNTL__MEM_POWER_OVERRIDE_MASK;
if (temp != data)
WREG32(mmSDMA0_POWER_CNTL, data);
temp = data = RREG32(mmSDMA1_POWER_CNTL);
data |= SDMA1_POWER_CNTL__MEM_POWER_OVERRIDE_MASK;
if (temp != data)
WREG32(mmSDMA1_POWER_CNTL, data);
} else {
temp = data = RREG32(mmSDMA0_POWER_CNTL);
data &= ~SDMA0_POWER_CNTL__MEM_POWER_OVERRIDE_MASK;
if (temp != data)
WREG32(mmSDMA0_POWER_CNTL, data);
temp = data = RREG32(mmSDMA1_POWER_CNTL);
data &= ~SDMA1_POWER_CNTL__MEM_POWER_OVERRIDE_MASK;
if (temp != data)
WREG32(mmSDMA1_POWER_CNTL, data);
}
}
static int sdma_v3_0_set_clockgating_state(void *handle,
enum amd_clockgating_state state)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
switch (adev->asic_type) {
case CHIP_FIJI:
fiji_update_sdma_medium_grain_clock_gating(adev,
state == AMD_CG_STATE_GATE ? true : false);
fiji_update_sdma_medium_grain_light_sleep(adev,
state == AMD_CG_STATE_GATE ? true : false);
break;
default:
break;
}
return 0;
}
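The SDMA helpers above differ from the MC ones in one detail worth calling out: they snapshot the register with `temp = data = RREG32(...)` and only issue the write when the value actually changed, saving an MMIO write on the fast path. A sketch of that compare-before-write step (hypothetical `sim_*` names, with a write counter standing in for bus traffic):

```c
#include <stdint.h>

uint32_t sim_reg;        /* stand-in for one MMIO register */
unsigned sim_writes;     /* counts how many writes actually hit the bus */

void sim_wreg32(uint32_t val) { sim_reg = val; sim_writes++; }

/* mirrors the temp/data idiom in fiji_update_sdma_*(): write only on change */
void sim_set_mask_if_changed(uint32_t mask, int enable)
{
	uint32_t temp, data;

	temp = data = sim_reg;          /* one read, two copies */
	if (enable)
		data |= mask;
	else
		data &= ~mask;
	if (data != temp)               /* skip the write when nothing changed */
		sim_wreg32(data);
}
```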


@@ -24,7 +24,7 @@
#include <linux/firmware.h>
#include "drmP.h"
#include "amdgpu.h"
#include "tonga_smumgr.h"
#include "tonga_smum.h"
MODULE_FIRMWARE("amdgpu/tonga_smc.bin");


@@ -273,8 +273,14 @@ static void tonga_ih_set_rptr(struct amdgpu_device *adev)
static int tonga_ih_early_init(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
int ret;
ret = amdgpu_irq_add_domain(adev);
if (ret)
return ret;
tonga_ih_set_interrupt_funcs(adev);
return 0;
}
@@ -301,6 +307,7 @@ static int tonga_ih_sw_fini(void *handle)
amdgpu_irq_fini(adev);
amdgpu_ih_ring_fini(adev);
amdgpu_irq_remove_domain(adev);
return 0;
}


@@ -1,198 +0,0 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef TONGA_PP_SMC_H
#define TONGA_PP_SMC_H
#pragma pack(push, 1)
#define PPSMC_SWSTATE_FLAG_DC 0x01
#define PPSMC_SWSTATE_FLAG_UVD 0x02
#define PPSMC_SWSTATE_FLAG_VCE 0x04
#define PPSMC_SWSTATE_FLAG_PCIE_X1 0x08
#define PPSMC_THERMAL_PROTECT_TYPE_INTERNAL 0x00
#define PPSMC_THERMAL_PROTECT_TYPE_EXTERNAL 0x01
#define PPSMC_THERMAL_PROTECT_TYPE_NONE 0xff
#define PPSMC_SYSTEMFLAG_GPIO_DC 0x01
#define PPSMC_SYSTEMFLAG_STEPVDDC 0x02
#define PPSMC_SYSTEMFLAG_GDDR5 0x04
#define PPSMC_SYSTEMFLAG_DISABLE_BABYSTEP 0x08
#define PPSMC_SYSTEMFLAG_REGULATOR_HOT 0x10
#define PPSMC_SYSTEMFLAG_REGULATOR_HOT_ANALOG 0x20
#define PPSMC_SYSTEMFLAG_12CHANNEL 0x40
#define PPSMC_EXTRAFLAGS_AC2DC_ACTION_MASK 0x07
#define PPSMC_EXTRAFLAGS_AC2DC_DONT_WAIT_FOR_VBLANK 0x08
#define PPSMC_EXTRAFLAGS_AC2DC_ACTION_GOTODPMLOWSTATE 0x00
#define PPSMC_EXTRAFLAGS_AC2DC_ACTION_GOTOINITIALSTATE 0x01
#define PPSMC_EXTRAFLAGS_AC2DC_GPIO5_POLARITY_HIGH 0x10
#define PPSMC_EXTRAFLAGS_DRIVER_TO_GPIO17 0x20
#define PPSMC_EXTRAFLAGS_PCC_TO_GPIO17 0x40
#define PPSMC_DPM2FLAGS_TDPCLMP 0x01
#define PPSMC_DPM2FLAGS_PWRSHFT 0x02
#define PPSMC_DPM2FLAGS_OCP 0x04
#define PPSMC_DISPLAY_WATERMARK_LOW 0
#define PPSMC_DISPLAY_WATERMARK_HIGH 1
#define PPSMC_STATEFLAG_AUTO_PULSE_SKIP 0x01
#define PPSMC_STATEFLAG_POWERBOOST 0x02
#define PPSMC_STATEFLAG_PSKIP_ON_TDP_FAULT 0x04
#define PPSMC_STATEFLAG_POWERSHIFT 0x08
#define PPSMC_STATEFLAG_SLOW_READ_MARGIN 0x10
#define PPSMC_STATEFLAG_DEEPSLEEP_THROTTLE 0x20
#define PPSMC_STATEFLAG_DEEPSLEEP_BYPASS 0x40
#define FDO_MODE_HARDWARE 0
#define FDO_MODE_PIECE_WISE_LINEAR 1
enum FAN_CONTROL {
FAN_CONTROL_FUZZY,
FAN_CONTROL_TABLE
};
#define PPSMC_Result_OK ((uint16_t)0x01)
#define PPSMC_Result_NoMore ((uint16_t)0x02)
#define PPSMC_Result_NotNow ((uint16_t)0x03)
#define PPSMC_Result_Failed ((uint16_t)0xFF)
#define PPSMC_Result_UnknownCmd ((uint16_t)0xFE)
#define PPSMC_Result_UnknownVT ((uint16_t)0xFD)
typedef uint16_t PPSMC_Result;
#define PPSMC_isERROR(x) ((uint16_t)0x80 & (x))
#define PPSMC_MSG_Halt ((uint16_t)0x10)
#define PPSMC_MSG_Resume ((uint16_t)0x11)
#define PPSMC_MSG_EnableDPMLevel ((uint16_t)0x12)
#define PPSMC_MSG_ZeroLevelsDisabled ((uint16_t)0x13)
#define PPSMC_MSG_OneLevelsDisabled ((uint16_t)0x14)
#define PPSMC_MSG_TwoLevelsDisabled ((uint16_t)0x15)
#define PPSMC_MSG_EnableThermalInterrupt ((uint16_t)0x16)
#define PPSMC_MSG_RunningOnAC ((uint16_t)0x17)
#define PPSMC_MSG_LevelUp ((uint16_t)0x18)
#define PPSMC_MSG_LevelDown ((uint16_t)0x19)
#define PPSMC_MSG_ResetDPMCounters ((uint16_t)0x1a)
#define PPSMC_MSG_SwitchToSwState ((uint16_t)0x20)
#define PPSMC_MSG_SwitchToSwStateLast ((uint16_t)0x3f)
#define PPSMC_MSG_SwitchToInitialState ((uint16_t)0x40)
#define PPSMC_MSG_NoForcedLevel ((uint16_t)0x41)
#define PPSMC_MSG_ForceHigh ((uint16_t)0x42)
#define PPSMC_MSG_ForceMediumOrHigh ((uint16_t)0x43)
#define PPSMC_MSG_SwitchToMinimumPower ((uint16_t)0x51)
#define PPSMC_MSG_ResumeFromMinimumPower ((uint16_t)0x52)
#define PPSMC_MSG_EnableCac ((uint16_t)0x53)
#define PPSMC_MSG_DisableCac ((uint16_t)0x54)
#define PPSMC_DPMStateHistoryStart ((uint16_t)0x55)
#define PPSMC_DPMStateHistoryStop ((uint16_t)0x56)
#define PPSMC_CACHistoryStart ((uint16_t)0x57)
#define PPSMC_CACHistoryStop ((uint16_t)0x58)
#define PPSMC_TDPClampingActive ((uint16_t)0x59)
#define PPSMC_TDPClampingInactive ((uint16_t)0x5A)
#define PPSMC_StartFanControl ((uint16_t)0x5B)
#define PPSMC_StopFanControl ((uint16_t)0x5C)
#define PPSMC_NoDisplay ((uint16_t)0x5D)
#define PPSMC_HasDisplay ((uint16_t)0x5E)
#define PPSMC_MSG_UVDPowerOFF ((uint16_t)0x60)
#define PPSMC_MSG_UVDPowerON ((uint16_t)0x61)
#define PPSMC_MSG_EnableULV ((uint16_t)0x62)
#define PPSMC_MSG_DisableULV ((uint16_t)0x63)
#define PPSMC_MSG_EnterULV ((uint16_t)0x64)
#define PPSMC_MSG_ExitULV ((uint16_t)0x65)
#define PPSMC_PowerShiftActive ((uint16_t)0x6A)
#define PPSMC_PowerShiftInactive ((uint16_t)0x6B)
#define PPSMC_OCPActive ((uint16_t)0x6C)
#define PPSMC_OCPInactive ((uint16_t)0x6D)
#define PPSMC_CACLongTermAvgEnable ((uint16_t)0x6E)
#define PPSMC_CACLongTermAvgDisable ((uint16_t)0x6F)
#define PPSMC_MSG_InferredStateSweep_Start ((uint16_t)0x70)
#define PPSMC_MSG_InferredStateSweep_Stop ((uint16_t)0x71)
#define PPSMC_MSG_SwitchToLowestInfState ((uint16_t)0x72)
#define PPSMC_MSG_SwitchToNonInfState ((uint16_t)0x73)
#define PPSMC_MSG_AllStateSweep_Start ((uint16_t)0x74)
#define PPSMC_MSG_AllStateSweep_Stop ((uint16_t)0x75)
#define PPSMC_MSG_SwitchNextLowerInfState ((uint16_t)0x76)
#define PPSMC_MSG_SwitchNextHigherInfState ((uint16_t)0x77)
#define PPSMC_MSG_MclkRetrainingTest ((uint16_t)0x78)
#define PPSMC_MSG_ForceTDPClamping ((uint16_t)0x79)
#define PPSMC_MSG_CollectCAC_PowerCorreln ((uint16_t)0x7A)
#define PPSMC_MSG_CollectCAC_WeightCalib ((uint16_t)0x7B)
#define PPSMC_MSG_CollectCAC_SQonly ((uint16_t)0x7C)
#define PPSMC_MSG_CollectCAC_TemperaturePwr ((uint16_t)0x7D)
#define PPSMC_MSG_ExtremitiesTest_Start ((uint16_t)0x7E)
#define PPSMC_MSG_ExtremitiesTest_Stop ((uint16_t)0x7F)
#define PPSMC_FlushDataCache ((uint16_t)0x80)
#define PPSMC_FlushInstrCache ((uint16_t)0x81)
#define PPSMC_MSG_SetEnabledLevels ((uint16_t)0x82)
#define PPSMC_MSG_SetForcedLevels ((uint16_t)0x83)
#define PPSMC_MSG_ResetToDefaults ((uint16_t)0x84)
#define PPSMC_MSG_SetForcedLevelsAndJump ((uint16_t)0x85)
#define PPSMC_MSG_SetCACHistoryMode ((uint16_t)0x86)
#define PPSMC_MSG_EnableDTE ((uint16_t)0x87)
#define PPSMC_MSG_DisableDTE ((uint16_t)0x88)
#define PPSMC_MSG_SmcSpaceSetAddress ((uint16_t)0x89)
#define PPSMC_MSG_SmcSpaceWriteDWordInc ((uint16_t)0x8A)
#define PPSMC_MSG_SmcSpaceWriteWordInc ((uint16_t)0x8B)
#define PPSMC_MSG_SmcSpaceWriteByteInc ((uint16_t)0x8C)
#define PPSMC_MSG_ChangeNearTDPLimit ((uint16_t)0x90)
#define PPSMC_MSG_ChangeSafePowerLimit ((uint16_t)0x91)
#define PPSMC_MSG_DPMStateSweepStart ((uint16_t)0x92)
#define PPSMC_MSG_DPMStateSweepStop ((uint16_t)0x93)
#define PPSMC_MSG_OVRDDisableSCLKDS ((uint16_t)0x94)
#define PPSMC_MSG_CancelDisableOVRDSCLKDS ((uint16_t)0x95)
#define PPSMC_MSG_ThrottleOVRDSCLKDS ((uint16_t)0x96)
#define PPSMC_MSG_CancelThrottleOVRDSCLKDS ((uint16_t)0x97)
#define PPSMC_MSG_GPIO17 ((uint16_t)0x98)
#define PPSMC_MSG_API_SetSvi2Volt_Vddc ((uint16_t)0x99)
#define PPSMC_MSG_API_SetSvi2Volt_Vddci ((uint16_t)0x9A)
#define PPSMC_MSG_API_SetSvi2Volt_Mvdd ((uint16_t)0x9B)
#define PPSMC_MSG_API_GetSvi2Volt_Vddc ((uint16_t)0x9C)
#define PPSMC_MSG_API_GetSvi2Volt_Vddci ((uint16_t)0x9D)
#define PPSMC_MSG_API_GetSvi2Volt_Mvdd ((uint16_t)0x9E)
#define PPSMC_MSG_BREAK ((uint16_t)0xF8)
#define PPSMC_MSG_Test ((uint16_t)0x100)
#define PPSMC_MSG_DRV_DRAM_ADDR_HI ((uint16_t)0x250)
#define PPSMC_MSG_DRV_DRAM_ADDR_LO ((uint16_t)0x251)
#define PPSMC_MSG_SMU_DRAM_ADDR_HI ((uint16_t)0x252)
#define PPSMC_MSG_SMU_DRAM_ADDR_LO ((uint16_t)0x253)
#define PPSMC_MSG_LoadUcodes ((uint16_t)0x254)
typedef uint16_t PPSMC_Msg;
#define PPSMC_EVENT_STATUS_THERMAL 0x00000001
#define PPSMC_EVENT_STATUS_REGULATORHOT 0x00000002
#define PPSMC_EVENT_STATUS_DC 0x00000004
#define PPSMC_EVENT_STATUS_GPIO17 0x00000008
#pragma pack(pop)
#endif


@@ -25,7 +25,7 @@
#include "drmP.h"
#include "amdgpu.h"
#include "tonga_ppsmc.h"
#include "tonga_smumgr.h"
#include "tonga_smum.h"
#include "smu_ucode_xfer_vi.h"
#include "amdgpu_ucode.h"


@@ -279,6 +279,234 @@ static void uvd_v6_0_mc_resume(struct amdgpu_device *adev)
WREG32(mmUVD_VCPU_CACHE_SIZE2, size);
}
static void cz_set_uvd_clock_gating_branches(struct amdgpu_device *adev,
bool enable)
{
u32 data, data1;
data = RREG32(mmUVD_CGC_GATE);
data1 = RREG32(mmUVD_SUVD_CGC_GATE);
if (enable) {
data |= UVD_CGC_GATE__SYS_MASK |
UVD_CGC_GATE__UDEC_MASK |
UVD_CGC_GATE__MPEG2_MASK |
UVD_CGC_GATE__RBC_MASK |
UVD_CGC_GATE__LMI_MC_MASK |
UVD_CGC_GATE__IDCT_MASK |
UVD_CGC_GATE__MPRD_MASK |
UVD_CGC_GATE__MPC_MASK |
UVD_CGC_GATE__LBSI_MASK |
UVD_CGC_GATE__LRBBM_MASK |
UVD_CGC_GATE__UDEC_RE_MASK |
UVD_CGC_GATE__UDEC_CM_MASK |
UVD_CGC_GATE__UDEC_IT_MASK |
UVD_CGC_GATE__UDEC_DB_MASK |
UVD_CGC_GATE__UDEC_MP_MASK |
UVD_CGC_GATE__WCB_MASK |
UVD_CGC_GATE__VCPU_MASK |
UVD_CGC_GATE__SCPU_MASK;
data1 |= UVD_SUVD_CGC_GATE__SRE_MASK |
UVD_SUVD_CGC_GATE__SIT_MASK |
UVD_SUVD_CGC_GATE__SMP_MASK |
UVD_SUVD_CGC_GATE__SCM_MASK |
UVD_SUVD_CGC_GATE__SDB_MASK |
UVD_SUVD_CGC_GATE__SRE_H264_MASK |
UVD_SUVD_CGC_GATE__SRE_HEVC_MASK |
UVD_SUVD_CGC_GATE__SIT_H264_MASK |
UVD_SUVD_CGC_GATE__SIT_HEVC_MASK |
UVD_SUVD_CGC_GATE__SCM_H264_MASK |
UVD_SUVD_CGC_GATE__SCM_HEVC_MASK |
UVD_SUVD_CGC_GATE__SDB_H264_MASK |
UVD_SUVD_CGC_GATE__SDB_HEVC_MASK;
} else {
data &= ~(UVD_CGC_GATE__SYS_MASK |
UVD_CGC_GATE__UDEC_MASK |
UVD_CGC_GATE__MPEG2_MASK |
UVD_CGC_GATE__RBC_MASK |
UVD_CGC_GATE__LMI_MC_MASK |
UVD_CGC_GATE__LMI_UMC_MASK |
UVD_CGC_GATE__IDCT_MASK |
UVD_CGC_GATE__MPRD_MASK |
UVD_CGC_GATE__MPC_MASK |
UVD_CGC_GATE__LBSI_MASK |
UVD_CGC_GATE__LRBBM_MASK |
UVD_CGC_GATE__UDEC_RE_MASK |
UVD_CGC_GATE__UDEC_CM_MASK |
UVD_CGC_GATE__UDEC_IT_MASK |
UVD_CGC_GATE__UDEC_DB_MASK |
UVD_CGC_GATE__UDEC_MP_MASK |
UVD_CGC_GATE__WCB_MASK |
UVD_CGC_GATE__VCPU_MASK |
UVD_CGC_GATE__SCPU_MASK);
data1 &= ~(UVD_SUVD_CGC_GATE__SRE_MASK |
UVD_SUVD_CGC_GATE__SIT_MASK |
UVD_SUVD_CGC_GATE__SMP_MASK |
UVD_SUVD_CGC_GATE__SCM_MASK |
UVD_SUVD_CGC_GATE__SDB_MASK |
UVD_SUVD_CGC_GATE__SRE_H264_MASK |
UVD_SUVD_CGC_GATE__SRE_HEVC_MASK |
UVD_SUVD_CGC_GATE__SIT_H264_MASK |
UVD_SUVD_CGC_GATE__SIT_HEVC_MASK |
UVD_SUVD_CGC_GATE__SCM_H264_MASK |
UVD_SUVD_CGC_GATE__SCM_HEVC_MASK |
UVD_SUVD_CGC_GATE__SDB_H264_MASK |
UVD_SUVD_CGC_GATE__SDB_HEVC_MASK);
}
WREG32(mmUVD_CGC_GATE, data);
WREG32(mmUVD_SUVD_CGC_GATE, data1);
}
static void tonga_set_uvd_clock_gating_branches(struct amdgpu_device *adev,
bool enable)
{
u32 data, data1;
data = RREG32(mmUVD_CGC_GATE);
data1 = RREG32(mmUVD_SUVD_CGC_GATE);
if (enable) {
data |= UVD_CGC_GATE__SYS_MASK |
UVD_CGC_GATE__UDEC_MASK |
UVD_CGC_GATE__MPEG2_MASK |
UVD_CGC_GATE__RBC_MASK |
UVD_CGC_GATE__LMI_MC_MASK |
UVD_CGC_GATE__IDCT_MASK |
UVD_CGC_GATE__MPRD_MASK |
UVD_CGC_GATE__MPC_MASK |
UVD_CGC_GATE__LBSI_MASK |
UVD_CGC_GATE__LRBBM_MASK |
UVD_CGC_GATE__UDEC_RE_MASK |
UVD_CGC_GATE__UDEC_CM_MASK |
UVD_CGC_GATE__UDEC_IT_MASK |
UVD_CGC_GATE__UDEC_DB_MASK |
UVD_CGC_GATE__UDEC_MP_MASK |
UVD_CGC_GATE__WCB_MASK |
UVD_CGC_GATE__VCPU_MASK |
UVD_CGC_GATE__SCPU_MASK;
data1 |= UVD_SUVD_CGC_GATE__SRE_MASK |
UVD_SUVD_CGC_GATE__SIT_MASK |
UVD_SUVD_CGC_GATE__SMP_MASK |
UVD_SUVD_CGC_GATE__SCM_MASK |
UVD_SUVD_CGC_GATE__SDB_MASK;
} else {
data &= ~(UVD_CGC_GATE__SYS_MASK |
UVD_CGC_GATE__UDEC_MASK |
UVD_CGC_GATE__MPEG2_MASK |
UVD_CGC_GATE__RBC_MASK |
UVD_CGC_GATE__LMI_MC_MASK |
UVD_CGC_GATE__LMI_UMC_MASK |
UVD_CGC_GATE__IDCT_MASK |
UVD_CGC_GATE__MPRD_MASK |
UVD_CGC_GATE__MPC_MASK |
UVD_CGC_GATE__LBSI_MASK |
UVD_CGC_GATE__LRBBM_MASK |
UVD_CGC_GATE__UDEC_RE_MASK |
UVD_CGC_GATE__UDEC_CM_MASK |
UVD_CGC_GATE__UDEC_IT_MASK |
UVD_CGC_GATE__UDEC_DB_MASK |
UVD_CGC_GATE__UDEC_MP_MASK |
UVD_CGC_GATE__WCB_MASK |
UVD_CGC_GATE__VCPU_MASK |
UVD_CGC_GATE__SCPU_MASK);
data1 &= ~(UVD_SUVD_CGC_GATE__SRE_MASK |
UVD_SUVD_CGC_GATE__SIT_MASK |
UVD_SUVD_CGC_GATE__SMP_MASK |
UVD_SUVD_CGC_GATE__SCM_MASK |
UVD_SUVD_CGC_GATE__SDB_MASK);
}
WREG32(mmUVD_CGC_GATE, data);
WREG32(mmUVD_SUVD_CGC_GATE, data1);
}
static void uvd_v6_0_set_uvd_dynamic_clock_mode(struct amdgpu_device *adev,
bool swmode)
{
u32 data, data1 = 0, data2;
/* Always un-gate UVD REGS bit */
data = RREG32(mmUVD_CGC_GATE);
data &= ~(UVD_CGC_GATE__REGS_MASK);
WREG32(mmUVD_CGC_GATE, data);
data = RREG32(mmUVD_CGC_CTRL);
data &= ~(UVD_CGC_CTRL__CLK_OFF_DELAY_MASK |
UVD_CGC_CTRL__CLK_GATE_DLY_TIMER_MASK);
data |= UVD_CGC_CTRL__DYN_CLOCK_MODE_MASK |
1 << REG_FIELD_SHIFT(UVD_CGC_CTRL, CLK_GATE_DLY_TIMER) |
4 << REG_FIELD_SHIFT(UVD_CGC_CTRL, CLK_OFF_DELAY);
data2 = RREG32(mmUVD_SUVD_CGC_CTRL);
if (swmode) {
data &= ~(UVD_CGC_CTRL__UDEC_RE_MODE_MASK |
UVD_CGC_CTRL__UDEC_CM_MODE_MASK |
UVD_CGC_CTRL__UDEC_IT_MODE_MASK |
UVD_CGC_CTRL__UDEC_DB_MODE_MASK |
UVD_CGC_CTRL__UDEC_MP_MODE_MASK |
UVD_CGC_CTRL__SYS_MODE_MASK |
UVD_CGC_CTRL__UDEC_MODE_MASK |
UVD_CGC_CTRL__MPEG2_MODE_MASK |
UVD_CGC_CTRL__REGS_MODE_MASK |
UVD_CGC_CTRL__RBC_MODE_MASK |
UVD_CGC_CTRL__LMI_MC_MODE_MASK |
UVD_CGC_CTRL__LMI_UMC_MODE_MASK |
UVD_CGC_CTRL__IDCT_MODE_MASK |
UVD_CGC_CTRL__MPRD_MODE_MASK |
UVD_CGC_CTRL__MPC_MODE_MASK |
UVD_CGC_CTRL__LBSI_MODE_MASK |
UVD_CGC_CTRL__LRBBM_MODE_MASK |
UVD_CGC_CTRL__WCB_MODE_MASK |
UVD_CGC_CTRL__VCPU_MODE_MASK |
UVD_CGC_CTRL__JPEG_MODE_MASK |
UVD_CGC_CTRL__SCPU_MODE_MASK);
data1 |= UVD_CGC_CTRL2__DYN_OCLK_RAMP_EN_MASK |
UVD_CGC_CTRL2__DYN_RCLK_RAMP_EN_MASK;
data1 &= ~UVD_CGC_CTRL2__GATER_DIV_ID_MASK;
data1 |= 7 << REG_FIELD_SHIFT(UVD_CGC_CTRL2, GATER_DIV_ID);
data2 &= ~(UVD_SUVD_CGC_CTRL__SRE_MODE_MASK |
UVD_SUVD_CGC_CTRL__SIT_MODE_MASK |
UVD_SUVD_CGC_CTRL__SMP_MODE_MASK |
UVD_SUVD_CGC_CTRL__SCM_MODE_MASK |
UVD_SUVD_CGC_CTRL__SDB_MODE_MASK);
} else {
data |= UVD_CGC_CTRL__UDEC_RE_MODE_MASK |
UVD_CGC_CTRL__UDEC_CM_MODE_MASK |
UVD_CGC_CTRL__UDEC_IT_MODE_MASK |
UVD_CGC_CTRL__UDEC_DB_MODE_MASK |
UVD_CGC_CTRL__UDEC_MP_MODE_MASK |
UVD_CGC_CTRL__SYS_MODE_MASK |
UVD_CGC_CTRL__UDEC_MODE_MASK |
UVD_CGC_CTRL__MPEG2_MODE_MASK |
UVD_CGC_CTRL__REGS_MODE_MASK |
UVD_CGC_CTRL__RBC_MODE_MASK |
UVD_CGC_CTRL__LMI_MC_MODE_MASK |
UVD_CGC_CTRL__LMI_UMC_MODE_MASK |
UVD_CGC_CTRL__IDCT_MODE_MASK |
UVD_CGC_CTRL__MPRD_MODE_MASK |
UVD_CGC_CTRL__MPC_MODE_MASK |
UVD_CGC_CTRL__LBSI_MODE_MASK |
UVD_CGC_CTRL__LRBBM_MODE_MASK |
UVD_CGC_CTRL__WCB_MODE_MASK |
UVD_CGC_CTRL__VCPU_MODE_MASK |
UVD_CGC_CTRL__SCPU_MODE_MASK;
data2 |= UVD_SUVD_CGC_CTRL__SRE_MODE_MASK |
UVD_SUVD_CGC_CTRL__SIT_MODE_MASK |
UVD_SUVD_CGC_CTRL__SMP_MODE_MASK |
UVD_SUVD_CGC_CTRL__SCM_MODE_MASK |
UVD_SUVD_CGC_CTRL__SDB_MODE_MASK;
}
WREG32(mmUVD_CGC_CTRL, data);
WREG32(mmUVD_SUVD_CGC_CTRL, data2);
data = RREG32_UVD_CTX(ixUVD_CGC_CTRL2);
data &= ~(REG_FIELD_MASK(UVD_CGC_CTRL2, DYN_OCLK_RAMP_EN) |
REG_FIELD_MASK(UVD_CGC_CTRL2, DYN_RCLK_RAMP_EN) |
REG_FIELD_MASK(UVD_CGC_CTRL2, GATER_DIV_ID));
data1 &= (REG_FIELD_MASK(UVD_CGC_CTRL2, DYN_OCLK_RAMP_EN) |
REG_FIELD_MASK(UVD_CGC_CTRL2, DYN_RCLK_RAMP_EN) |
REG_FIELD_MASK(UVD_CGC_CTRL2, GATER_DIV_ID));
data |= data1;
WREG32_UVD_CTX(ixUVD_CGC_CTRL2, data);
}
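`uvd_v6_0_set_uvd_dynamic_clock_mode()` above programs multi-bit delay fields with the `REG_FIELD_SHIFT()` helpers: clear the field masks first, then OR in the new values shifted into position. A sketch of that clear-then-or step with hypothetical field positions (the real UVD_CGC_CTRL layout may differ):

```c
#include <stdint.h>

/* hypothetical layout: DLY_TIMER at bits [3:0], OFF_DELAY at bits [11:4] */
#define SIM_DLY_TIMER_SHIFT 0
#define SIM_DLY_TIMER_MASK  (0xfu << SIM_DLY_TIMER_SHIFT)
#define SIM_OFF_DELAY_SHIFT 4
#define SIM_OFF_DELAY_MASK  (0xffu << SIM_OFF_DELAY_SHIFT)

/* clear both fields, then OR in the new values shifted into place */
uint32_t sim_set_delays(uint32_t reg, uint32_t timer, uint32_t delay)
{
	reg &= ~(SIM_DLY_TIMER_MASK | SIM_OFF_DELAY_MASK);
	reg |= (timer << SIM_DLY_TIMER_SHIFT) | (delay << SIM_OFF_DELAY_SHIFT);
	return reg;
}
```

Clearing before ORing matters: without the `&= ~mask` step, stale bits from a previous value would merge into the new field.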
/**
* uvd_v6_0_start - start UVD block
*
@@ -303,8 +531,19 @@ static int uvd_v6_0_start(struct amdgpu_device *adev)
uvd_v6_0_mc_resume(adev);
/* disable clock gating */
WREG32(mmUVD_CGC_GATE, 0);
/* Set dynamic clock gating in S/W control mode */
if (adev->cg_flags & AMDGPU_CG_SUPPORT_UVD_MGCG) {
if (adev->flags & AMD_IS_APU)
cz_set_uvd_clock_gating_branches(adev, false);
else
tonga_set_uvd_clock_gating_branches(adev, false);
uvd_v6_0_set_uvd_dynamic_clock_mode(adev, true);
} else {
/* disable clock gating */
uint32_t data = RREG32(mmUVD_CGC_CTRL);
data &= ~UVD_CGC_CTRL__DYN_CLOCK_MODE_MASK;
WREG32(mmUVD_CGC_CTRL, data);
}
/* disable interrupt */
WREG32_P(mmUVD_MASTINT_EN, 0, ~(1 << 1));
@@ -758,6 +997,24 @@ static int uvd_v6_0_process_interrupt(struct amdgpu_device *adev,
static int uvd_v6_0_set_clockgating_state(void *handle,
enum amd_clockgating_state state)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
bool enable = (state == AMD_CG_STATE_GATE) ? true : false;
if (!(adev->cg_flags & AMDGPU_CG_SUPPORT_UVD_MGCG))
return 0;
if (enable) {
if (adev->flags & AMD_IS_APU)
cz_set_uvd_clock_gating_branches(adev, enable);
else
tonga_set_uvd_clock_gating_branches(adev, enable);
uvd_v6_0_set_uvd_dynamic_clock_mode(adev, true);
} else {
uint32_t data = RREG32(mmUVD_CGC_CTRL);
data &= ~UVD_CGC_CTRL__DYN_CLOCK_MODE_MASK;
WREG32(mmUVD_CGC_CTRL, data);
}
return 0;
}


@@ -103,6 +103,108 @@ static void vce_v3_0_ring_set_wptr(struct amdgpu_ring *ring)
WREG32(mmVCE_RB_WPTR2, ring->wptr);
}
static void vce_v3_0_override_vce_clock_gating(struct amdgpu_device *adev, bool override)
{
u32 tmp, data;
tmp = data = RREG32(mmVCE_RB_ARB_CTRL);
if (override)
data |= VCE_RB_ARB_CTRL__VCE_CGTT_OVERRIDE_MASK;
else
data &= ~VCE_RB_ARB_CTRL__VCE_CGTT_OVERRIDE_MASK;
if (tmp != data)
WREG32(mmVCE_RB_ARB_CTRL, data);
}
static void vce_v3_0_set_vce_sw_clock_gating(struct amdgpu_device *adev,
bool gated)
{
u32 tmp, data;
/* Set Override to disable Clock Gating */
vce_v3_0_override_vce_clock_gating(adev, true);
if (!gated) {
/* Force CLOCK ON for VCE_CLOCK_GATING_B,
* {*_FORCE_ON, *_FORCE_OFF} = {1, 0}
* VREG can be FORCE ON or set to Dynamic, but can't be OFF
*/
tmp = data = RREG32(mmVCE_CLOCK_GATING_B);
data |= 0x1ff;
data &= ~0xef0000;
if (tmp != data)
WREG32(mmVCE_CLOCK_GATING_B, data);
/* Force CLOCK ON for VCE_UENC_CLOCK_GATING,
* {*_FORCE_ON, *_FORCE_OFF} = {1, 0}
*/
tmp = data = RREG32(mmVCE_UENC_CLOCK_GATING);
data |= 0x3ff000;
data &= ~0xffc00000;
if (tmp != data)
WREG32(mmVCE_UENC_CLOCK_GATING, data);
/* set VCE_UENC_CLOCK_GATING_2 */
tmp = data = RREG32(mmVCE_UENC_CLOCK_GATING_2);
data |= 0x2;
data &= ~0x2;
if (tmp != data)
WREG32(mmVCE_UENC_CLOCK_GATING_2, data);
/* Force CLOCK ON for VCE_UENC_REG_CLOCK_GATING */
tmp = data = RREG32(mmVCE_UENC_REG_CLOCK_GATING);
data |= 0x37f;
if (tmp != data)
WREG32(mmVCE_UENC_REG_CLOCK_GATING, data);
/* Force VCE_UENC_DMA_DCLK_CTRL Clock ON */
tmp = data = RREG32(mmVCE_UENC_DMA_DCLK_CTRL);
data |= VCE_UENC_DMA_DCLK_CTRL__WRDMCLK_FORCEON_MASK |
VCE_UENC_DMA_DCLK_CTRL__RDDMCLK_FORCEON_MASK |
VCE_UENC_DMA_DCLK_CTRL__REGCLK_FORCEON_MASK |
0x8;
if (tmp != data)
WREG32(mmVCE_UENC_DMA_DCLK_CTRL, data);
} else {
/* Force CLOCK OFF for VCE_CLOCK_GATING_B,
* {*, *_FORCE_OFF} = {*, 1}
* set VREG to Dynamic, as it can't be OFF
*/
tmp = data = RREG32(mmVCE_CLOCK_GATING_B);
data &= ~0x80010;
data |= 0xe70008;
if (tmp != data)
WREG32(mmVCE_CLOCK_GATING_B, data);
/* Force CLOCK OFF for VCE_UENC_CLOCK_GATING,
* Force CLOCK OFF takes precedence over Force CLOCK ON setting.
* {*_FORCE_ON, *_FORCE_OFF} = {*, 1}
*/
tmp = data = RREG32(mmVCE_UENC_CLOCK_GATING);
data |= 0xffc00000;
if (tmp != data)
WREG32(mmVCE_UENC_CLOCK_GATING, data);
/* Set VCE_UENC_CLOCK_GATING_2 */
tmp = data = RREG32(mmVCE_UENC_CLOCK_GATING_2);
data |= 0x10000;
if (tmp != data)
WREG32(mmVCE_UENC_CLOCK_GATING_2, data);
/* Set VCE_UENC_REG_CLOCK_GATING to dynamic */
tmp = data = RREG32(mmVCE_UENC_REG_CLOCK_GATING);
data &= ~0xffc00000;
if (tmp != data)
WREG32(mmVCE_UENC_REG_CLOCK_GATING, data);
/* Set VCE_UENC_DMA_DCLK_CTRL CG always in dynamic mode */
tmp = data = RREG32(mmVCE_UENC_DMA_DCLK_CTRL);
data &= ~(VCE_UENC_DMA_DCLK_CTRL__WRDMCLK_FORCEON_MASK |
VCE_UENC_DMA_DCLK_CTRL__RDDMCLK_FORCEON_MASK |
VCE_UENC_DMA_DCLK_CTRL__REGCLK_FORCEON_MASK |
0x8);
if (tmp != data)
WREG32(mmVCE_UENC_DMA_DCLK_CTRL, data);
}
vce_v3_0_override_vce_clock_gating(adev, false);
}
/**
* vce_v3_0_start - start VCE block
*
@@ -121,7 +223,7 @@ static int vce_v3_0_start(struct amdgpu_device *adev)
if (adev->vce.harvest_config & (1 << idx))
continue;
if(idx == 0)
if (idx == 0)
WREG32_P(mmGRBM_GFX_INDEX, 0,
~GRBM_GFX_INDEX__VCE_INSTANCE_MASK);
else
@@ -174,6 +276,10 @@ static int vce_v3_0_start(struct amdgpu_device *adev)
/* clear BUSY flag */
WREG32_P(mmVCE_STATUS, 0, ~1);
/* Set Clock-Gating off */
if (adev->cg_flags & AMDGPU_CG_SUPPORT_VCE_MGCG)
vce_v3_0_set_vce_sw_clock_gating(adev, false);
if (r) {
DRM_ERROR("VCE not responding, giving up!!!\n");
mutex_unlock(&adev->grbm_idx_mutex);
@@ -208,14 +314,11 @@
static unsigned vce_v3_0_get_harvest_config(struct amdgpu_device *adev)
{
u32 tmp;
unsigned ret;
/* Fiji, Stoney are single pipe */
if ((adev->asic_type == CHIP_FIJI) ||
(adev->asic_type == CHIP_STONEY)){
ret = AMDGPU_VCE_HARVEST_VCE1;
return ret;
}
(adev->asic_type == CHIP_STONEY))
return AMDGPU_VCE_HARVEST_VCE1;
/* Tonga and CZ are dual or single pipe */
if (adev->flags & AMD_IS_APU)
@@ -229,19 +332,14 @@ static unsigned vce_v3_0_get_harvest_config(struct amdgpu_device *adev)
switch (tmp) {
case 1:
ret = AMDGPU_VCE_HARVEST_VCE0;
break;
return AMDGPU_VCE_HARVEST_VCE0;
case 2:
ret = AMDGPU_VCE_HARVEST_VCE1;
break;
return AMDGPU_VCE_HARVEST_VCE1;
case 3:
ret = AMDGPU_VCE_HARVEST_VCE0 | AMDGPU_VCE_HARVEST_VCE1;
break;
return AMDGPU_VCE_HARVEST_VCE0 | AMDGPU_VCE_HARVEST_VCE1;
default:
ret = 0;
return 0;
}
return ret;
}
static int vce_v3_0_early_init(void *handle)
@@ -316,28 +414,22 @@ static int vce_v3_0_sw_fini(void *handle)
static int vce_v3_0_hw_init(void *handle)
{
struct amdgpu_ring *ring;
int r;
int r, i;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
r = vce_v3_0_start(adev);
if (r)
return r;
ring = &adev->vce.ring[0];
ring->ready = true;
r = amdgpu_ring_test_ring(ring);
if (r) {
ring->ready = false;
return r;
}
adev->vce.ring[0].ready = false;
adev->vce.ring[1].ready = false;
ring = &adev->vce.ring[1];
ring->ready = true;
r = amdgpu_ring_test_ring(ring);
if (r) {
ring->ready = false;
return r;
for (i = 0; i < 2; i++) {
r = amdgpu_ring_test_ring(&adev->vce.ring[i]);
if (r)
return r;
else
adev->vce.ring[i].ready = true;
}
DRM_INFO("VCE initialized successfully.\n");
@@ -437,17 +529,9 @@ static bool vce_v3_0_is_idle(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
u32 mask = 0;
int idx;
for (idx = 0; idx < 2; ++idx) {
if (adev->vce.harvest_config & (1 << idx))
continue;
if (idx == 0)
mask |= SRBM_STATUS2__VCE0_BUSY_MASK;
else
mask |= SRBM_STATUS2__VCE1_BUSY_MASK;
}
mask |= (adev->vce.harvest_config & AMDGPU_VCE_HARVEST_VCE0) ? 0 : SRBM_STATUS2__VCE0_BUSY_MASK;
mask |= (adev->vce.harvest_config & AMDGPU_VCE_HARVEST_VCE1) ? 0 : SRBM_STATUS2__VCE1_BUSY_MASK;
return !(RREG32(mmSRBM_STATUS2) & mask);
}
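The rewritten `vce_v3_0_is_idle()` above replaces the harvest loop with two conditional ORs; the logic is simply "include an instance's BUSY bit in the mask unless that instance is harvested". A sketch of that mask construction under hypothetical bit assignments (the real SRBM_STATUS2 bit positions are not reproduced here):

```c
#include <stdint.h>

#define SIM_HARVEST_VCE0 (1u << 0)
#define SIM_HARVEST_VCE1 (1u << 1)
#define SIM_VCE0_BUSY    (1u << 0)   /* stand-ins for SRBM_STATUS2 bits */
#define SIM_VCE1_BUSY    (1u << 1)

/* include a BUSY bit only for instances that are not harvested */
uint32_t sim_busy_mask(uint32_t harvest_config)
{
	uint32_t mask = 0;

	mask |= (harvest_config & SIM_HARVEST_VCE0) ? 0 : SIM_VCE0_BUSY;
	mask |= (harvest_config & SIM_HARVEST_VCE1) ? 0 : SIM_VCE1_BUSY;
	return mask;
}

int sim_is_idle(uint32_t status, uint32_t harvest_config)
{
	return !(status & sim_busy_mask(harvest_config));
}
```

The payoff is that a fused-off instance that happens to report BUSY can never stall the idle check.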
@@ -456,23 +540,11 @@ static int vce_v3_0_wait_for_idle(void *handle)
{
unsigned i;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
u32 mask = 0;
int idx;
for (idx = 0; idx < 2; ++idx) {
if (adev->vce.harvest_config & (1 << idx))
continue;
if (idx == 0)
mask |= SRBM_STATUS2__VCE0_BUSY_MASK;
else
mask |= SRBM_STATUS2__VCE1_BUSY_MASK;
}
for (i = 0; i < adev->usec_timeout; i++) {
if (!(RREG32(mmSRBM_STATUS2) & mask))
for (i = 0; i < adev->usec_timeout; i++)
if (vce_v3_0_is_idle(handle))
return 0;
}
return -ETIMEDOUT;
}
@@ -480,17 +552,10 @@ static int vce_v3_0_soft_reset(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
u32 mask = 0;
int idx;
for (idx = 0; idx < 2; ++idx) {
if (adev->vce.harvest_config & (1 << idx))
continue;
mask |= (adev->vce.harvest_config & AMDGPU_VCE_HARVEST_VCE0) ? 0 : SRBM_SOFT_RESET__SOFT_RESET_VCE0_MASK;
mask |= (adev->vce.harvest_config & AMDGPU_VCE_HARVEST_VCE1) ? 0 : SRBM_SOFT_RESET__SOFT_RESET_VCE1_MASK;
if (idx == 0)
mask |= SRBM_SOFT_RESET__SOFT_RESET_VCE0_MASK;
else
mask |= SRBM_SOFT_RESET__SOFT_RESET_VCE1_MASK;
}
WREG32_P(mmSRBM_SOFT_RESET, mask,
~(SRBM_SOFT_RESET__SOFT_RESET_VCE0_MASK |
SRBM_SOFT_RESET__SOFT_RESET_VCE1_MASK));
@@ -592,10 +657,8 @@ static int vce_v3_0_process_interrupt(struct amdgpu_device *adev,
switch (entry->src_data) {
case 0:
amdgpu_fence_process(&adev->vce.ring[0]);
break;
case 1:
amdgpu_fence_process(&adev->vce.ring[1]);
amdgpu_fence_process(&adev->vce.ring[entry->src_data]);
break;
default:
DRM_ERROR("Unhandled interrupt: %d %d\n",
@@ -609,6 +672,47 @@ static int vce_v3_0_process_interrupt(struct amdgpu_device *adev,
static int vce_v3_0_set_clockgating_state(void *handle,
enum amd_clockgating_state state)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
bool enable = (state == AMD_CG_STATE_GATE) ? true : false;
int i;
if (!(adev->cg_flags & AMDGPU_CG_SUPPORT_VCE_MGCG))
return 0;
mutex_lock(&adev->grbm_idx_mutex);
for (i = 0; i < 2; i++) {
/* Program VCE Instance 0 or 1 if not harvested */
if (adev->vce.harvest_config & (1 << i))
continue;
if (i == 0)
WREG32_P(mmGRBM_GFX_INDEX, 0,
~GRBM_GFX_INDEX__VCE_INSTANCE_MASK);
else
WREG32_P(mmGRBM_GFX_INDEX,
GRBM_GFX_INDEX__VCE_INSTANCE_MASK,
~GRBM_GFX_INDEX__VCE_INSTANCE_MASK);
if (enable) {
/* initialize VCE_CLOCK_GATING_A: Clock ON/OFF delay */
uint32_t data = RREG32(mmVCE_CLOCK_GATING_A);
data &= ~(0xf | 0xff0);
data |= ((0x0 << 0) | (0x04 << 4));
WREG32(mmVCE_CLOCK_GATING_A, data);
/* initialize VCE_UENC_CLOCK_GATING: Clock ON/OFF delay */
data = RREG32(mmVCE_UENC_CLOCK_GATING);
data &= ~(0xf | 0xff0);
data |= ((0x0 << 0) | (0x04 << 4));
WREG32(mmVCE_UENC_CLOCK_GATING, data);
}
vce_v3_0_set_vce_sw_clock_gating(adev, enable);
}
WREG32_P(mmGRBM_GFX_INDEX, 0, ~GRBM_GFX_INDEX__VCE_INSTANCE_MASK);
mutex_unlock(&adev->grbm_idx_mutex);
return 0;
}


@@ -31,6 +31,7 @@
#include "amdgpu_vce.h"
#include "amdgpu_ucode.h"
#include "atom.h"
#include "amd_pcie.h"
#include "gmc/gmc_8_1_d.h"
#include "gmc/gmc_8_1_sh_mask.h"
@@ -71,6 +72,7 @@
#include "uvd_v5_0.h"
#include "uvd_v6_0.h"
#include "vce_v3_0.h"
#include "amdgpu_powerplay.h"
/*
* Indirect registers accessor
@@ -376,6 +378,38 @@ static bool vi_read_disabled_bios(struct amdgpu_device *adev)
WREG32_SMC(ixROM_CNTL, rom_cntl);
return r;
}
static bool vi_read_bios_from_rom(struct amdgpu_device *adev,
u8 *bios, u32 length_bytes)
{
u32 *dw_ptr;
unsigned long flags;
u32 i, length_dw;
if (bios == NULL)
return false;
if (length_bytes == 0)
return false;
/* APU vbios image is part of sbios image */
if (adev->flags & AMD_IS_APU)
return false;
dw_ptr = (u32 *)bios;
length_dw = ALIGN(length_bytes, 4) / 4;
/* take the smc lock since we are using the smc index */
spin_lock_irqsave(&adev->smc_idx_lock, flags);
/* set rom index to 0 */
WREG32(mmSMC_IND_INDEX_0, ixROM_INDEX);
WREG32(mmSMC_IND_DATA_0, 0);
/* set index to data for continuous read */
WREG32(mmSMC_IND_INDEX_0, ixROM_DATA);
for (i = 0; i < length_dw; i++)
dw_ptr[i] = RREG32(mmSMC_IND_DATA_0);
spin_unlock_irqrestore(&adev->smc_idx_lock, flags);
return true;
}
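The dword-rounding in the copy loop above can be sketched standalone. The SMC data port is 32 bits wide, so the byte count is rounded up to whole dwords before the loop; the `fake_rom` contents and helper names below are invented for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the ALIGN(length_bytes, 4) / 4 pattern in vi_read_bios_from_rom(). */
#define ALIGN4(x) (((x) + 3u) & ~3u)

static const uint32_t fake_rom[4] = { 0x11111111, 0x22222222, 0x33333333, 0x44444444 };

static uint32_t dwords_for(uint32_t length_bytes)
{
	return ALIGN4(length_bytes) / 4;
}

/* stand-in for the RREG32(mmSMC_IND_DATA_0) auto-increment read loop:
 * sum the dwords a caller would receive for a given byte count */
static uint32_t read_rom_sum(uint32_t length_bytes)
{
	uint32_t i, sum = 0;

	for (i = 0; i < dwords_for(length_bytes); i++)
		sum += fake_rom[i];
	return sum;
}
```

A 6-byte request thus covers two dwords, which is why the real function can hand back slightly more data than asked for.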
static struct amdgpu_allowed_register_entry tonga_allowed_read_registers[] = {
{mmGB_MACROTILE_MODE7, true},
};
@@ -1019,9 +1053,6 @@ static int vi_set_vce_clocks(struct amdgpu_device *adev, u32 evclk, u32 ecclk)
static void vi_pcie_gen3_enable(struct amdgpu_device *adev)
{
if (pci_is_root_bus(adev->pdev->bus))
return;
@@ -1031,11 +1062,8 @@ static void vi_pcie_gen3_enable(struct amdgpu_device *adev)
if (adev->flags & AMD_IS_APU)
return;
if (!(adev->pm.pcie_gen_mask & (CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2 |
CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)))
return;
/* todo */
@@ -1098,7 +1126,7 @@ static const struct amdgpu_ip_block_version topaz_ip_blocks[] =
.major = 7,
.minor = 1,
.rev = 0,
.funcs = &amdgpu_pp_ip_funcs,
},
{
.type = AMD_IP_BLOCK_TYPE_GFX,
@@ -1145,7 +1173,7 @@ static const struct amdgpu_ip_block_version tonga_ip_blocks[] =
.major = 7,
.minor = 1,
.rev = 0,
.funcs = &amdgpu_pp_ip_funcs,
},
{
.type = AMD_IP_BLOCK_TYPE_DCE,
@@ -1213,7 +1241,7 @@ static const struct amdgpu_ip_block_version fiji_ip_blocks[] =
.major = 7,
.minor = 1,
.rev = 0,
.funcs = &amdgpu_pp_ip_funcs,
},
{
.type = AMD_IP_BLOCK_TYPE_DCE,
@@ -1281,7 +1309,7 @@ static const struct amdgpu_ip_block_version cz_ip_blocks[] =
.major = 8,
.minor = 0,
.rev = 0,
.funcs = &amdgpu_pp_ip_funcs
},
{
.type = AMD_IP_BLOCK_TYPE_DCE,
@@ -1354,20 +1382,18 @@ int vi_set_ip_blocks(struct amdgpu_device *adev)
static uint32_t vi_get_rev_id(struct amdgpu_device *adev)
{
if (adev->flags & AMD_IS_APU)
return (RREG32_SMC(ATI_REV_ID_FUSE_MACRO__ADDRESS) & ATI_REV_ID_FUSE_MACRO__MASK)
>> ATI_REV_ID_FUSE_MACRO__SHIFT;
else
return (RREG32(mmPCIE_EFUSE4) & PCIE_EFUSE4__STRAP_BIF_ATI_REV_ID_MASK)
>> PCIE_EFUSE4__STRAP_BIF_ATI_REV_ID__SHIFT;
}
static const struct amdgpu_asic_funcs vi_asic_funcs =
{
.read_disabled_bios = &vi_read_disabled_bios,
.read_bios_from_rom = &vi_read_bios_from_rom,
.read_register = &vi_read_register,
.reset = &vi_asic_reset,
.set_vga_state = &vi_vga_set_state,
@@ -1416,7 +1442,8 @@ static int vi_common_early_init(void *handle)
break;
case CHIP_FIJI:
adev->has_uvd = true;
adev->cg_flags = AMDGPU_CG_SUPPORT_UVD_MGCG |
AMDGPU_CG_SUPPORT_VCE_MGCG;
adev->pg_flags = 0;
adev->external_rev_id = adev->rev_id + 0x3c;
break;
@@ -1442,6 +1469,8 @@ static int vi_common_early_init(void *handle)
if (amdgpu_smc_load_fw && smc_enabled)
adev->firmware.smu_load = true;
amdgpu_get_pcie_info(adev);
return 0;
}
@@ -1515,9 +1544,95 @@ static int vi_common_soft_reset(void *handle)
return 0;
}
static void fiji_update_bif_medium_grain_light_sleep(struct amdgpu_device *adev,
bool enable)
{
uint32_t temp, data;
temp = data = RREG32_PCIE(ixPCIE_CNTL2);
if (enable)
data |= PCIE_CNTL2__SLV_MEM_LS_EN_MASK |
PCIE_CNTL2__MST_MEM_LS_EN_MASK |
PCIE_CNTL2__REPLAY_MEM_LS_EN_MASK;
else
data &= ~(PCIE_CNTL2__SLV_MEM_LS_EN_MASK |
PCIE_CNTL2__MST_MEM_LS_EN_MASK |
PCIE_CNTL2__REPLAY_MEM_LS_EN_MASK);
if (temp != data)
WREG32_PCIE(ixPCIE_CNTL2, data);
}
static void fiji_update_hdp_medium_grain_clock_gating(struct amdgpu_device *adev,
bool enable)
{
uint32_t temp, data;
temp = data = RREG32(mmHDP_HOST_PATH_CNTL);
if (enable)
data &= ~HDP_HOST_PATH_CNTL__CLOCK_GATING_DIS_MASK;
else
data |= HDP_HOST_PATH_CNTL__CLOCK_GATING_DIS_MASK;
if (temp != data)
WREG32(mmHDP_HOST_PATH_CNTL, data);
}
static void fiji_update_hdp_light_sleep(struct amdgpu_device *adev,
bool enable)
{
uint32_t temp, data;
temp = data = RREG32(mmHDP_MEM_POWER_LS);
if (enable)
data |= HDP_MEM_POWER_LS__LS_ENABLE_MASK;
else
data &= ~HDP_MEM_POWER_LS__LS_ENABLE_MASK;
if (temp != data)
WREG32(mmHDP_MEM_POWER_LS, data);
}
static void fiji_update_rom_medium_grain_clock_gating(struct amdgpu_device *adev,
bool enable)
{
uint32_t temp, data;
temp = data = RREG32_SMC(ixCGTT_ROM_CLK_CTRL0);
if (enable)
data &= ~(CGTT_ROM_CLK_CTRL0__SOFT_OVERRIDE0_MASK |
CGTT_ROM_CLK_CTRL0__SOFT_OVERRIDE1_MASK);
else
data |= CGTT_ROM_CLK_CTRL0__SOFT_OVERRIDE0_MASK |
CGTT_ROM_CLK_CTRL0__SOFT_OVERRIDE1_MASK;
if (temp != data)
WREG32_SMC(ixCGTT_ROM_CLK_CTRL0, data);
}
static int vi_common_set_clockgating_state(void *handle,
enum amd_clockgating_state state)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
switch (adev->asic_type) {
case CHIP_FIJI:
fiji_update_bif_medium_grain_light_sleep(adev,
state == AMD_CG_STATE_GATE ? true : false);
fiji_update_hdp_medium_grain_clock_gating(adev,
state == AMD_CG_STATE_GATE ? true : false);
fiji_update_hdp_light_sleep(adev,
state == AMD_CG_STATE_GATE ? true : false);
fiji_update_rom_medium_grain_clock_gating(adev,
state == AMD_CG_STATE_GATE ? true : false);
break;
default:
break;
}
return 0;
}
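The fiji_update_* helpers above all share one read-modify-write shape: snapshot the register, flip the enable bits, and write back only when the value actually changed, so redundant bus writes are skipped. A minimal standalone sketch (register, mask, and helper names invented):

```c
#include <assert.h>
#include <stdint.h>

#define LS_EN_MASK 0x7u /* invented light-sleep enable bits */

static uint32_t fake_reg = 0x100;
static int write_count;

static uint32_t rreg(void) { return fake_reg; }
static void wreg(uint32_t v) { fake_reg = v; write_count++; }

/* read-modify-write; skip the write when nothing changed */
static void update_light_sleep(int enable)
{
	uint32_t temp, data;

	temp = data = rreg();
	if (enable)
		data |= LS_EN_MASK;
	else
		data &= ~LS_EN_MASK;
	if (temp != data)
		wreg(data);
}

/* enable twice, then disable: only two writes should reach the "bus" */
static int run_demo(void)
{
	update_light_sleep(1);
	update_light_sleep(1);
	update_light_sleep(0);
	return write_count;
}
```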


@@ -21,14 +21,63 @@
*
*/
#ifndef AMDGPU_ACPI_H
#define AMDGPU_ACPI_H
#ifndef AMD_ACPI_H
#define AMD_ACPI_H
struct amdgpu_device;
struct acpi_bus_event;
#define ACPI_AC_CLASS "ac_adapter"
int amdgpu_atif_handler(struct amdgpu_device *adev,
struct acpi_bus_event *event);
struct atif_verify_interface {
u16 size; /* structure size in bytes (includes size field) */
u16 version; /* version */
u32 notification_mask; /* supported notifications mask */
u32 function_bits; /* supported functions bit vector */
} __packed;
struct atif_system_params {
u16 size; /* structure size in bytes (includes size field) */
u32 valid_mask; /* valid flags mask */
u32 flags; /* flags */
u8 command_code; /* notify command code */
} __packed;
struct atif_sbios_requests {
u16 size; /* structure size in bytes (includes size field) */
u32 pending; /* pending sbios requests */
u8 panel_exp_mode; /* panel expansion mode */
u8 thermal_gfx; /* thermal state: target gfx controller */
u8 thermal_state; /* thermal state: state id (0: exit state, non-0: state) */
u8 forced_power_gfx; /* forced power state: target gfx controller */
u8 forced_power_state; /* forced power state: state id */
u8 system_power_src; /* system power source */
u8 backlight_level; /* panel backlight level (0-255) */
} __packed;
#define ATIF_NOTIFY_MASK 0x3
#define ATIF_NOTIFY_NONE 0
#define ATIF_NOTIFY_81 1
#define ATIF_NOTIFY_N 2
struct atcs_verify_interface {
u16 size; /* structure size in bytes (includes size field) */
u16 version; /* version */
u32 function_bits; /* supported functions bit vector */
} __packed;
#define ATCS_VALID_FLAGS_MASK 0x3
struct atcs_pref_req_input {
u16 size; /* structure size in bytes (includes size field) */
u16 client_id; /* client id (bit 2-0: func num, 7-3: dev num, 15-8: bus num) */
u16 valid_flags_mask; /* valid flags mask */
u16 flags; /* flags */
u8 req_type; /* request type */
u8 perf_req; /* performance request */
} __packed;
struct atcs_pref_req_output {
u16 size; /* structure size in bytes (includes size field) */
u8 ret_val; /* return value */
} __packed;
/* AMD hw uses four ACPI control methods:
* 1. ATIF


@@ -0,0 +1,50 @@
/*
* Copyright 2015 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef __AMD_PCIE_H__
#define __AMD_PCIE_H__
/* The following flags show the PCIe link speeds supported by the driver, which are decided by the chipset and ASIC */
#define CAIL_PCIE_LINK_SPEED_SUPPORT_GEN1 0x00010000
#define CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2 0x00020000
#define CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3 0x00040000
#define CAIL_PCIE_LINK_SPEED_SUPPORT_MASK 0xFFFF0000
#define CAIL_PCIE_LINK_SPEED_SUPPORT_SHIFT 16
/* The following flags show the PCIe link speeds supported by the ASIC hardware. */
#define CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN1 0x00000001
#define CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN2 0x00000002
#define CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN3 0x00000004
#define CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_MASK 0x0000FFFF
#define CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_SHIFT 0
/* The following flags show the PCIe lane width switching supported by the driver, which is decided by the chipset and ASIC */
#define CAIL_PCIE_LINK_WIDTH_SUPPORT_X1 0x00010000
#define CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 0x00020000
#define CAIL_PCIE_LINK_WIDTH_SUPPORT_X4 0x00040000
#define CAIL_PCIE_LINK_WIDTH_SUPPORT_X8 0x00080000
#define CAIL_PCIE_LINK_WIDTH_SUPPORT_X12 0x00100000
#define CAIL_PCIE_LINK_WIDTH_SUPPORT_X16 0x00200000
#define CAIL_PCIE_LINK_WIDTH_SUPPORT_X32 0x00400000
#define CAIL_PCIE_LINK_WIDTH_SUPPORT_SHIFT 16
#endif
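The header packs two capability sets into one word: ASIC-supported link speeds in the low 16 bits and driver/chipset-supported speeds in the high 16 bits. A small sketch of pulling the halves apart (mask names shortened, test value invented):

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the amd_pcie.h split: ASIC caps low 16, system caps high 16. */
#define SYS_SPEED_MASK   0xFFFF0000u
#define SYS_SPEED_SHIFT  16
#define ASIC_SPEED_MASK  0x0000FFFFu

static uint32_t sys_speed_bits(uint32_t cap)
{
	return (cap & SYS_SPEED_MASK) >> SYS_SPEED_SHIFT;
}

static uint32_t asic_speed_bits(uint32_t cap)
{
	return cap & ASIC_SPEED_MASK;
}
```

For example, a word of `0x00020004` encodes GEN2 on the system side and GEN3 on the ASIC side.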


@@ -0,0 +1,141 @@
/*
* Copyright 2015 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef __AMD_PCIE_HELPERS_H__
#define __AMD_PCIE_HELPERS_H__
#include "amd_pcie.h"
static inline bool is_pcie_gen3_supported(uint32_t pcie_link_speed_cap)
{
if (pcie_link_speed_cap & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)
return true;
return false;
}
static inline bool is_pcie_gen2_supported(uint32_t pcie_link_speed_cap)
{
if (pcie_link_speed_cap & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2)
return true;
return false;
}
/* Get the new PCIe speed given the ASIC PCIe cap and the new state's requested PCIe speed */
static inline uint16_t get_pcie_gen_support(uint32_t pcie_link_speed_cap,
uint16_t ns_pcie_gen)
{
uint32_t asic_pcie_link_speed_cap = (pcie_link_speed_cap &
CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_MASK);
uint32_t sys_pcie_link_speed_cap = (pcie_link_speed_cap &
CAIL_PCIE_LINK_SPEED_SUPPORT_MASK);
switch (asic_pcie_link_speed_cap) {
case CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN1:
return PP_PCIEGen1;
case CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN2:
return PP_PCIEGen2;
case CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN3:
return PP_PCIEGen3;
default:
if (is_pcie_gen3_supported(sys_pcie_link_speed_cap) &&
(ns_pcie_gen == PP_PCIEGen3)) {
return PP_PCIEGen3;
} else if (is_pcie_gen2_supported(sys_pcie_link_speed_cap) &&
((ns_pcie_gen == PP_PCIEGen3) || (ns_pcie_gen == PP_PCIEGen2))) {
return PP_PCIEGen2;
}
}
return PP_PCIEGen1;
}
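The selection order in get_pcie_gen_support() can be mirrored in a standalone sketch: an exact single-bit ASIC cap is final, otherwise the requested generation is stepped down to what the system-side caps allow. The GEN1/GEN2/GEN3 labels below are local stand-ins for the PP_PCIEGen enum, whose real values come from the powerplay headers:

```c
#include <assert.h>
#include <stdint.h>

/* Local stand-ins for PP_PCIEGen1/2/3; real values may differ. */
enum { GEN1, GEN2, GEN3 };

static int pick_gen(uint32_t asic_caps, uint32_t sys_caps, int wanted)
{
	switch (asic_caps) {
	case 0x1: return GEN1;	/* ASIC pinned to a single gen: use it */
	case 0x2: return GEN2;
	case 0x4: return GEN3;
	default:
		/* fall back from the requested gen to the system caps */
		if ((sys_caps & 0x4) && wanted == GEN3)
			return GEN3;
		if ((sys_caps & 0x2) && (wanted == GEN3 || wanted == GEN2))
			return GEN2;
	}
	return GEN1;
}
```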
static inline uint16_t get_pcie_lane_support(uint32_t pcie_lane_width_cap,
uint16_t ns_pcie_lanes)
{
int i, j;
uint16_t new_pcie_lanes = ns_pcie_lanes;
uint16_t pcie_lanes[7] = {1, 2, 4, 8, 12, 16, 32};
switch (pcie_lane_width_cap) {
case 0:
printk(KERN_ERR "No valid PCIE lane width reported\n");
break;
case CAIL_PCIE_LINK_WIDTH_SUPPORT_X1:
new_pcie_lanes = 1;
break;
case CAIL_PCIE_LINK_WIDTH_SUPPORT_X2:
new_pcie_lanes = 2;
break;
case CAIL_PCIE_LINK_WIDTH_SUPPORT_X4:
new_pcie_lanes = 4;
break;
case CAIL_PCIE_LINK_WIDTH_SUPPORT_X8:
new_pcie_lanes = 8;
break;
case CAIL_PCIE_LINK_WIDTH_SUPPORT_X12:
new_pcie_lanes = 12;
break;
case CAIL_PCIE_LINK_WIDTH_SUPPORT_X16:
new_pcie_lanes = 16;
break;
case CAIL_PCIE_LINK_WIDTH_SUPPORT_X32:
new_pcie_lanes = 32;
break;
default:
for (i = 0; i < 7; i++) {
if (ns_pcie_lanes == pcie_lanes[i]) {
if (pcie_lane_width_cap & (0x10000 << i)) {
break;
} else {
for (j = i - 1; j >= 0; j--) {
if (pcie_lane_width_cap & (0x10000 << j)) {
new_pcie_lanes = pcie_lanes[j];
break;
}
}
if (j < 0) {
for (j = i + 1; j < 7; j++) {
if (pcie_lane_width_cap & (0x10000 << j)) {
new_pcie_lanes = pcie_lanes[j];
break;
}
}
if (j == 7)
printk(KERN_ERR "Cannot find a valid PCIE lane width!\n");
}
}
break;
}
}
break;
}
return new_pcie_lanes;
}
#endif
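get_pcie_lane_support()'s fallback search (prefer the requested width, then the next narrower supported width, then the next wider one) can be restated without the nested switch; this sketch uses the same width table and the same bit-16-based capability encoding:

```c
#include <assert.h>
#include <stdint.h>

/* cap bit (16 + i) set means widths[i] is supported */
static uint16_t pick_lanes(uint32_t width_cap, uint16_t wanted)
{
	static const uint16_t widths[7] = {1, 2, 4, 8, 12, 16, 32};
	int i, j;

	for (i = 0; i < 7; i++) {
		if (wanted != widths[i])
			continue;
		if (width_cap & (0x10000u << i))
			return wanted;		/* requested width works */
		for (j = i - 1; j >= 0; j--)	/* next narrower */
			if (width_cap & (0x10000u << j))
				return widths[j];
		for (j = i + 1; j < 7; j++)	/* else next wider */
			if (width_cap & (0x10000u << j))
				return widths[j];
	}
	return wanted; /* nothing valid reported: keep the request */
}
```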


@@ -85,6 +85,27 @@ enum amd_powergating_state {
AMD_PG_STATE_UNGATE,
};
enum amd_pm_state_type {
/* not used for dpm */
POWER_STATE_TYPE_DEFAULT,
POWER_STATE_TYPE_POWERSAVE,
/* user selectable states */
POWER_STATE_TYPE_BATTERY,
POWER_STATE_TYPE_BALANCED,
POWER_STATE_TYPE_PERFORMANCE,
/* internal states */
POWER_STATE_TYPE_INTERNAL_UVD,
POWER_STATE_TYPE_INTERNAL_UVD_SD,
POWER_STATE_TYPE_INTERNAL_UVD_HD,
POWER_STATE_TYPE_INTERNAL_UVD_HD2,
POWER_STATE_TYPE_INTERNAL_UVD_MVC,
POWER_STATE_TYPE_INTERNAL_BOOT,
POWER_STATE_TYPE_INTERNAL_THERMAL,
POWER_STATE_TYPE_INTERNAL_ACPI,
POWER_STATE_TYPE_INTERNAL_ULV,
POWER_STATE_TYPE_INTERNAL_3DPERF,
};
struct amd_ip_funcs {
/* sets up early driver state (pre sw_init), does not configure hw - Optional */
int (*early_init)(void *handle);


@@ -596,6 +596,7 @@
#define mmSWRST_EP_CONTROL_0 0x14ac
#define mmCPM_CONTROL 0x14b8
#define mmGSKT_CONTROL 0x14bf
#define ixSWRST_COMMAND_1 0x1400103
#define ixLM_CONTROL 0x1400120
#define ixLM_PCIETXMUX0 0x1400121
#define ixLM_PCIETXMUX1 0x1400122


@@ -2807,5 +2807,18 @@
#define ixDIDT_DBR_WEIGHT0_3 0x90
#define ixDIDT_DBR_WEIGHT4_7 0x91
#define ixDIDT_DBR_WEIGHT8_11 0x92
#define mmTD_EDC_CNT 0x252e
#define mmCPF_EDC_TAG_CNT 0x3188
#define mmCPF_EDC_ROQ_CNT 0x3189
#define mmCPF_EDC_ATC_CNT 0x318a
#define mmCPG_EDC_TAG_CNT 0x318b
#define mmCPG_EDC_ATC_CNT 0x318c
#define mmCPG_EDC_DMA_CNT 0x318d
#define mmCPC_EDC_SCRATCH_CNT 0x318e
#define mmCPC_EDC_UCODE_CNT 0x318f
#define mmCPC_EDC_ATC_CNT 0x3190
#define mmDC_EDC_STATE_CNT 0x3191
#define mmDC_EDC_CSINVOC_CNT 0x3192
#define mmDC_EDC_RESTORE_CNT 0x3193
#endif /* GFX_8_0_D_H */


@@ -550,6 +550,13 @@ typedef struct _COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_1
//MPLL_CNTL_FLAG_BYPASS_AD_PLL has a wrong name, should be BYPASS_DQ_PLL
#define MPLL_CNTL_FLAG_BYPASS_AD_PLL 0x04
// use for ComputeMemoryClockParamTable
typedef struct _COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_2
{
COMPUTE_MEMORY_ENGINE_PLL_PARAMETERS_V4 ulClock;
ULONG ulReserved;
}COMPUTE_MEMORY_CLOCK_PARAM_PARAMETERS_V2_2;
typedef struct _DYNAMICE_MEMORY_SETTINGS_PARAMETER
{
ATOM_COMPUTE_CLOCK_FREQ ulClock;
@@ -4988,6 +4995,78 @@ typedef struct _ATOM_ASIC_PROFILING_INFO_V3_3
ULONG ulSDCMargine;
}ATOM_ASIC_PROFILING_INFO_V3_3;
// for Fiji speed EVV algorithm
typedef struct _ATOM_ASIC_PROFILING_INFO_V3_4
{
ATOM_COMMON_TABLE_HEADER asHeader;
ULONG ulEvvLkgFactor;
ULONG ulBoardCoreTemp;
ULONG ulMaxVddc;
ULONG ulMinVddc;
ULONG ulLoadLineSlop;
ULONG ulLeakageTemp;
ULONG ulLeakageVoltage;
EFUSE_LINEAR_FUNC_PARAM sCACm;
EFUSE_LINEAR_FUNC_PARAM sCACb;
EFUSE_LOGISTIC_FUNC_PARAM sKt_b;
EFUSE_LOGISTIC_FUNC_PARAM sKv_m;
EFUSE_LOGISTIC_FUNC_PARAM sKv_b;
USHORT usLkgEuseIndex;
UCHAR ucLkgEfuseBitLSB;
UCHAR ucLkgEfuseLength;
ULONG ulLkgEncodeLn_MaxDivMin;
ULONG ulLkgEncodeMax;
ULONG ulLkgEncodeMin;
ULONG ulEfuseLogisticAlpha;
USHORT usPowerDpm0;
USHORT usPowerDpm1;
USHORT usPowerDpm2;
USHORT usPowerDpm3;
USHORT usPowerDpm4;
USHORT usPowerDpm5;
USHORT usPowerDpm6;
USHORT usPowerDpm7;
ULONG ulTdpDerateDPM0;
ULONG ulTdpDerateDPM1;
ULONG ulTdpDerateDPM2;
ULONG ulTdpDerateDPM3;
ULONG ulTdpDerateDPM4;
ULONG ulTdpDerateDPM5;
ULONG ulTdpDerateDPM6;
ULONG ulTdpDerateDPM7;
EFUSE_LINEAR_FUNC_PARAM sRoFuse;
ULONG ulEvvDefaultVddc;
ULONG ulEvvNoCalcVddc;
USHORT usParamNegFlag;
USHORT usSpeed_Model;
ULONG ulSM_A0;
ULONG ulSM_A1;
ULONG ulSM_A2;
ULONG ulSM_A3;
ULONG ulSM_A4;
ULONG ulSM_A5;
ULONG ulSM_A6;
ULONG ulSM_A7;
UCHAR ucSM_A0_sign;
UCHAR ucSM_A1_sign;
UCHAR ucSM_A2_sign;
UCHAR ucSM_A3_sign;
UCHAR ucSM_A4_sign;
UCHAR ucSM_A5_sign;
UCHAR ucSM_A6_sign;
UCHAR ucSM_A7_sign;
ULONG ulMargin_RO_a;
ULONG ulMargin_RO_b;
ULONG ulMargin_RO_c;
ULONG ulMargin_fixed;
ULONG ulMargin_Fmax_mean;
ULONG ulMargin_plat_mean;
ULONG ulMargin_Fmax_sigma;
ULONG ulMargin_plat_sigma;
ULONG ulMargin_DC_sigma;
ULONG ulReserved[8]; // Reserved for future ASIC
}ATOM_ASIC_PROFILING_INFO_V3_4;
typedef struct _ATOM_POWER_SOURCE_OBJECT
{
UCHAR ucPwrSrcId; // Power source


@@ -105,6 +105,34 @@ enum cgs_ucode_id {
CGS_UCODE_ID_MAXIMUM,
};
enum cgs_system_info_id {
CGS_SYSTEM_INFO_ADAPTER_BDF_ID = 1,
CGS_SYSTEM_INFO_PCIE_GEN_INFO,
CGS_SYSTEM_INFO_PCIE_MLW,
CGS_SYSTEM_INFO_ID_MAXIMUM,
};
struct cgs_system_info {
uint64_t size;
uint64_t info_id;
union {
void *ptr;
uint64_t value;
};
uint64_t padding[13];
};
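The size field in struct cgs_system_info lets the backend reject callers compiled against a different struct layout. A common pattern, sketched with an invented analogue of the struct and query function:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* invented analogue of struct cgs_system_info */
struct sys_info {
	uint64_t size;    /* caller sets this to sizeof(struct sys_info) */
	uint64_t info_id;
	uint64_t value;
};

static int query_system_info(struct sys_info *info)
{
	if (info == NULL || info->size != sizeof(*info))
		return -1; /* layout mismatch: refuse to fill the struct */
	info->value = 0x1234; /* pretend backend answer */
	return 0;
}

static int query_ok(uint64_t size)
{
	struct sys_info info;

	memset(&info, 0, sizeof(info));
	info.size = size;
	return query_system_info(&info);
}
```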
/*
* enum cgs_resource_type - GPU resource type
*/
enum cgs_resource_type {
CGS_RESOURCE_TYPE_MMIO = 0,
CGS_RESOURCE_TYPE_FB,
CGS_RESOURCE_TYPE_IO,
CGS_RESOURCE_TYPE_DOORBELL,
CGS_RESOURCE_TYPE_ROM,
};
/**
* struct cgs_clock_limits - Clock limits
*
@@ -127,8 +155,53 @@ struct cgs_firmware_info {
void *kptr;
};
struct cgs_mode_info {
uint32_t refresh_rate;
uint32_t ref_clock;
uint32_t vblank_time_us;
};
struct cgs_display_info {
uint32_t display_count;
uint32_t active_display_mask;
struct cgs_mode_info *mode_info;
};
typedef unsigned long cgs_handle_t;
#define CGS_ACPI_METHOD_ATCS 0x53435441
#define CGS_ACPI_METHOD_ATIF 0x46495441
#define CGS_ACPI_METHOD_ATPX 0x58505441
#define CGS_ACPI_FIELD_METHOD_NAME 0x00000001
#define CGS_ACPI_FIELD_INPUT_ARGUMENT_COUNT 0x00000002
#define CGS_ACPI_MAX_BUFFER_SIZE 256
#define CGS_ACPI_TYPE_ANY 0x00
#define CGS_ACPI_TYPE_INTEGER 0x01
#define CGS_ACPI_TYPE_STRING 0x02
#define CGS_ACPI_TYPE_BUFFER 0x03
#define CGS_ACPI_TYPE_PACKAGE 0x04
struct cgs_acpi_method_argument {
uint32_t type;
uint32_t method_length;
uint32_t data_length;
union{
uint32_t value;
void *pointer;
};
};
struct cgs_acpi_method_info {
uint32_t size;
uint32_t field;
uint32_t input_count;
uint32_t name;
struct cgs_acpi_method_argument *pinput_argument;
uint32_t output_count;
struct cgs_acpi_method_argument *poutput_argument;
uint32_t padding[9];
};
/**
* cgs_gpu_mem_info() - Return information about memory heaps
* @cgs_device: opaque device handle
@@ -355,6 +428,23 @@ typedef void (*cgs_write_pci_config_word_t)(void *cgs_device, unsigned addr,
typedef void (*cgs_write_pci_config_dword_t)(void *cgs_device, unsigned addr,
uint32_t value);
/**
* cgs_get_pci_resource() - provide access to a device resource (PCI BAR)
* @cgs_device: opaque device handle
* @resource_type: Type of Resource (MMIO, IO, ROM, FB, DOORBELL)
* @size: size of the region
* @offset: offset from the start of the region
* @resource_base: base address (not including offset) returned
*
* Return: 0 on success, -errno otherwise
*/
typedef int (*cgs_get_pci_resource_t)(void *cgs_device,
enum cgs_resource_type resource_type,
uint64_t size,
uint64_t offset,
uint64_t *resource_base);
/**
* cgs_atom_get_data_table() - Get a pointer to an ATOM BIOS data table
* @cgs_device: opaque device handle
@@ -493,6 +583,21 @@ typedef int(*cgs_set_clockgating_state)(void *cgs_device,
enum amd_ip_block_type block_type,
enum amd_clockgating_state state);
typedef int(*cgs_get_active_displays_info)(
void *cgs_device,
struct cgs_display_info *info);
typedef int (*cgs_call_acpi_method)(void *cgs_device,
uint32_t acpi_method,
uint32_t acpi_function,
void *pinput, void *poutput,
uint32_t output_count,
uint32_t input_size,
uint32_t output_size);
typedef int (*cgs_query_system_info)(void *cgs_device,
struct cgs_system_info *sys_info);
struct cgs_ops {
/* memory management calls (similar to KFD interface) */
cgs_gpu_mem_info_t gpu_mem_info;
@@ -516,6 +621,8 @@ struct cgs_ops {
cgs_write_pci_config_byte_t write_pci_config_byte;
cgs_write_pci_config_word_t write_pci_config_word;
cgs_write_pci_config_dword_t write_pci_config_dword;
/* PCI resources */
cgs_get_pci_resource_t get_pci_resource;
/* ATOM BIOS */
cgs_atom_get_data_table_t atom_get_data_table;
cgs_atom_get_cmd_table_revs_t atom_get_cmd_table_revs;
@@ -533,7 +640,12 @@ struct cgs_ops {
/* cg pg interface*/
cgs_set_powergating_state set_powergating_state;
cgs_set_clockgating_state set_clockgating_state;
/* ACPI (TODO) */
/* display manager */
cgs_get_active_displays_info get_active_displays_info;
/* ACPI */
cgs_call_acpi_method call_acpi_method;
/* get system info */
cgs_query_system_info query_system_info;
};
struct cgs_os_ops; /* To be define in OS-specific CGS header */
@@ -620,5 +732,15 @@ struct cgs_device
CGS_CALL(set_powergating_state, dev, block_type, state)
#define cgs_set_clockgating_state(dev, block_type, state) \
CGS_CALL(set_clockgating_state, dev, block_type, state)
#define cgs_get_active_displays_info(dev, info) \
CGS_CALL(get_active_displays_info, dev, info)
#define cgs_call_acpi_method(dev, acpi_method, acpi_function, pintput, poutput, output_count, input_size, output_size) \
CGS_CALL(call_acpi_method, dev, acpi_method, acpi_function, pintput, poutput, output_count, input_size, output_size)
#define cgs_query_system_info(dev, sys_info) \
CGS_CALL(query_system_info, dev, sys_info)
#define cgs_get_pci_resource(cgs_device, resource_type, size, offset, \
resource_base) \
CGS_CALL(get_pci_resource, cgs_device, resource_type, size, offset, \
resource_base)
#endif /* _CGS_COMMON_H */
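The CGS_CALL-style macros above all reduce to one idiom: cast the opaque handle back to the device, then dispatch through its ops table. A self-contained miniature of that indirection (all names invented):

```c
#include <assert.h>
#include <stdint.h>

struct demo_ops {
	int (*query)(void *dev, uint32_t id, uint64_t *out);
};

struct demo_device {
	const struct demo_ops *ops;
};

/* same shape as CGS_CALL: handle -> ops table -> function pointer */
#define DEMO_CALL(func, dev, ...) \
	(((struct demo_device *)(dev))->ops->func(dev, __VA_ARGS__))

static int demo_query(void *dev, uint32_t id, uint64_t *out)
{
	(void)dev;
	*out = (uint64_t)id * 2; /* dummy backend answer */
	return 0;
}

static const struct demo_ops demo_ops_table = { .query = demo_query };
static struct demo_device demo_dev = { .ops = &demo_ops_table };

static uint64_t query_val(uint32_t id)
{
	uint64_t v = 0;

	DEMO_CALL(query, &demo_dev, id, &v);
	return v;
}
```

The indirection keeps callers (the amdgpu IP blocks, powerplay) independent of which backend fills in the ops table.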


@@ -0,0 +1,6 @@
config DRM_AMD_POWERPLAY
bool "Enable AMD powerplay component"
depends on DRM_AMDGPU
default n
help
Select this option to enable the AMD powerplay component.


@@ -0,0 +1,22 @@
subdir-ccflags-y += -Iinclude/drm \
-Idrivers/gpu/drm/amd/powerplay/inc/ \
-Idrivers/gpu/drm/amd/include/asic_reg \
-Idrivers/gpu/drm/amd/include \
-Idrivers/gpu/drm/amd/powerplay/smumgr \
-Idrivers/gpu/drm/amd/powerplay/hwmgr \
-Idrivers/gpu/drm/amd/powerplay/eventmgr
AMD_PP_PATH = ../powerplay
PP_LIBS = smumgr hwmgr eventmgr
AMD_POWERPLAY = $(addsuffix /Makefile,$(addprefix drivers/gpu/drm/amd/powerplay/,$(PP_LIBS)))
include $(AMD_POWERPLAY)
POWER_MGR = amd_powerplay.o
AMD_PP_POWER = $(addprefix $(AMD_PP_PATH)/,$(POWER_MGR))
AMD_POWERPLAY_FILES += $(AMD_PP_POWER)


@@ -0,0 +1,660 @@
/*
* Copyright 2015 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/gfp.h>
#include <linux/slab.h>
#include "amd_shared.h"
#include "amd_powerplay.h"
#include "pp_instance.h"
#include "power_state.h"
#include "eventmanager.h"
#define PP_CHECK(handle) \
do { \
if ((handle) == NULL || (handle)->pp_valid != PP_VALID) \
return -EINVAL; \
} while (0)
static int pp_early_init(void *handle)
{
return 0;
}
static int pp_sw_init(void *handle)
{
struct pp_instance *pp_handle;
struct pp_hwmgr *hwmgr;
int ret = 0;
if (handle == NULL)
return -EINVAL;
pp_handle = (struct pp_instance *)handle;
hwmgr = pp_handle->hwmgr;
if (hwmgr == NULL || hwmgr->pptable_func == NULL ||
hwmgr->hwmgr_func == NULL ||
hwmgr->pptable_func->pptable_init == NULL ||
hwmgr->hwmgr_func->backend_init == NULL)
return -EINVAL;
ret = hwmgr->pptable_func->pptable_init(hwmgr);
if (ret == 0)
ret = hwmgr->hwmgr_func->backend_init(hwmgr);
return ret;
}
static int pp_sw_fini(void *handle)
{
struct pp_instance *pp_handle;
struct pp_hwmgr *hwmgr;
int ret = 0;
if (handle == NULL)
return -EINVAL;
pp_handle = (struct pp_instance *)handle;
hwmgr = pp_handle->hwmgr;
if (hwmgr != NULL && hwmgr->hwmgr_func != NULL &&
hwmgr->hwmgr_func->backend_fini != NULL)
ret = hwmgr->hwmgr_func->backend_fini(hwmgr);
return ret;
}
static int pp_hw_init(void *handle)
{
struct pp_instance *pp_handle;
struct pp_smumgr *smumgr;
struct pp_eventmgr *eventmgr;
int ret = 0;
if (handle == NULL)
return -EINVAL;
pp_handle = (struct pp_instance *)handle;
smumgr = pp_handle->smu_mgr;
if (smumgr == NULL || smumgr->smumgr_funcs == NULL ||
smumgr->smumgr_funcs->smu_init == NULL ||
smumgr->smumgr_funcs->start_smu == NULL)
return -EINVAL;
ret = smumgr->smumgr_funcs->smu_init(smumgr);
if (ret) {
printk(KERN_ERR "[ powerplay ] smc initialization failed\n");
return ret;
}
ret = smumgr->smumgr_funcs->start_smu(smumgr);
if (ret) {
printk(KERN_ERR "[ powerplay ] smc start failed\n");
smumgr->smumgr_funcs->smu_fini(smumgr);
return ret;
}
hw_init_power_state_table(pp_handle->hwmgr);
eventmgr = pp_handle->eventmgr;
if (eventmgr == NULL || eventmgr->pp_eventmgr_init == NULL)
return -EINVAL;
ret = eventmgr->pp_eventmgr_init(eventmgr);
return ret;
}
static int pp_hw_fini(void *handle)
{
struct pp_instance *pp_handle;
struct pp_smumgr *smumgr;
struct pp_eventmgr *eventmgr;
if (handle == NULL)
return -EINVAL;
pp_handle = (struct pp_instance *)handle;
eventmgr = pp_handle->eventmgr;
if (eventmgr != NULL && eventmgr->pp_eventmgr_fini != NULL)
eventmgr->pp_eventmgr_fini(eventmgr);
smumgr = pp_handle->smu_mgr;
if (smumgr != NULL && smumgr->smumgr_funcs != NULL &&
smumgr->smumgr_funcs->smu_fini != NULL)
smumgr->smumgr_funcs->smu_fini(smumgr);
return 0;
}
static bool pp_is_idle(void *handle)
{
return false;
}
static int pp_wait_for_idle(void *handle)
{
return 0;
}
static int pp_sw_reset(void *handle)
{
return 0;
}
static void pp_print_status(void *handle)
{
}
static int pp_set_clockgating_state(void *handle,
enum amd_clockgating_state state)
{
return 0;
}
static int pp_set_powergating_state(void *handle,
enum amd_powergating_state state)
{
return 0;
}
static int pp_suspend(void *handle)
{
struct pp_instance *pp_handle;
struct pp_eventmgr *eventmgr;
struct pem_event_data event_data = { {0} };
if (handle == NULL)
return -EINVAL;
pp_handle = (struct pp_instance *)handle;
eventmgr = pp_handle->eventmgr;
pem_handle_event(eventmgr, AMD_PP_EVENT_SUSPEND, &event_data);
return 0;
}
static int pp_resume(void *handle)
{
struct pp_instance *pp_handle;
struct pp_eventmgr *eventmgr;
struct pem_event_data event_data = { {0} };
struct pp_smumgr *smumgr;
int ret;
if (handle == NULL)
return -EINVAL;
pp_handle = (struct pp_instance *)handle;
smumgr = pp_handle->smu_mgr;
if (smumgr == NULL || smumgr->smumgr_funcs == NULL ||
smumgr->smumgr_funcs->start_smu == NULL)
return -EINVAL;
ret = smumgr->smumgr_funcs->start_smu(smumgr);
if (ret) {
printk(KERN_ERR "[ powerplay ] smc start failed\n");
smumgr->smumgr_funcs->smu_fini(smumgr);
return ret;
}
eventmgr = pp_handle->eventmgr;
pem_handle_event(eventmgr, AMD_PP_EVENT_RESUME, &event_data);
return 0;
}
const struct amd_ip_funcs pp_ip_funcs = {
.early_init = pp_early_init,
.late_init = NULL,
.sw_init = pp_sw_init,
.sw_fini = pp_sw_fini,
.hw_init = pp_hw_init,
.hw_fini = pp_hw_fini,
.suspend = pp_suspend,
.resume = pp_resume,
.is_idle = pp_is_idle,
.wait_for_idle = pp_wait_for_idle,
.soft_reset = pp_sw_reset,
.print_status = pp_print_status,
.set_clockgating_state = pp_set_clockgating_state,
.set_powergating_state = pp_set_powergating_state,
};
static int pp_dpm_load_fw(void *handle)
{
return 0;
}
static int pp_dpm_fw_loading_complete(void *handle)
{
return 0;
}
static int pp_dpm_force_performance_level(void *handle,
enum amd_dpm_forced_level level)
{
struct pp_instance *pp_handle;
struct pp_hwmgr *hwmgr;
if (handle == NULL)
return -EINVAL;
pp_handle = (struct pp_instance *)handle;
hwmgr = pp_handle->hwmgr;
if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
hwmgr->hwmgr_func->force_dpm_level == NULL)
return -EINVAL;
hwmgr->hwmgr_func->force_dpm_level(hwmgr, level);
return 0;
}
static enum amd_dpm_forced_level pp_dpm_get_performance_level(
void *handle)
{
struct pp_hwmgr *hwmgr;
if (handle == NULL)
return -EINVAL;
hwmgr = ((struct pp_instance *)handle)->hwmgr;
if (hwmgr == NULL)
return -EINVAL;
return (((struct pp_instance *)handle)->hwmgr->dpm_level);
}
static int pp_dpm_get_sclk(void *handle, bool low)
{
	struct pp_hwmgr *hwmgr;

	if (handle == NULL)
		return -EINVAL;

	hwmgr = ((struct pp_instance *)handle)->hwmgr;

	if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
	    hwmgr->hwmgr_func->get_sclk == NULL)
		return -EINVAL;

	return hwmgr->hwmgr_func->get_sclk(hwmgr, low);
}

static int pp_dpm_get_mclk(void *handle, bool low)
{
	struct pp_hwmgr *hwmgr;

	if (handle == NULL)
		return -EINVAL;

	hwmgr = ((struct pp_instance *)handle)->hwmgr;

	if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
	    hwmgr->hwmgr_func->get_mclk == NULL)
		return -EINVAL;

	return hwmgr->hwmgr_func->get_mclk(hwmgr, low);
}

static int pp_dpm_powergate_vce(void *handle, bool gate)
{
	struct pp_hwmgr *hwmgr;

	if (handle == NULL)
		return -EINVAL;

	hwmgr = ((struct pp_instance *)handle)->hwmgr;

	if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
	    hwmgr->hwmgr_func->powergate_vce == NULL)
		return -EINVAL;

	return hwmgr->hwmgr_func->powergate_vce(hwmgr, gate);
}

static int pp_dpm_powergate_uvd(void *handle, bool gate)
{
	struct pp_hwmgr *hwmgr;

	if (handle == NULL)
		return -EINVAL;

	hwmgr = ((struct pp_instance *)handle)->hwmgr;

	if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
	    hwmgr->hwmgr_func->powergate_uvd == NULL)
		return -EINVAL;

	return hwmgr->hwmgr_func->powergate_uvd(hwmgr, gate);
}
static enum PP_StateUILabel power_state_convert(enum amd_pm_state_type state)
{
	switch (state) {
	case POWER_STATE_TYPE_BATTERY:
		return PP_StateUILabel_Battery;
	case POWER_STATE_TYPE_BALANCED:
		return PP_StateUILabel_Balanced;
	case POWER_STATE_TYPE_PERFORMANCE:
		return PP_StateUILabel_Performance;
	default:
		return PP_StateUILabel_None;
	}
}
int pp_dpm_dispatch_tasks(void *handle, enum amd_pp_event event_id, void *input, void *output)
{
	int ret = 0;
	struct pp_instance *pp_handle;
	struct pem_event_data data = { {0} };

	pp_handle = (struct pp_instance *)handle;

	if (pp_handle == NULL)
		return -EINVAL;

	switch (event_id) {
	case AMD_PP_EVENT_DISPLAY_CONFIG_CHANGE:
		ret = pem_handle_event(pp_handle->eventmgr, event_id, &data);
		break;
	case AMD_PP_EVENT_ENABLE_USER_STATE:
	{
		enum amd_pm_state_type ps;

		if (input == NULL)
			return -EINVAL;
		ps = *(unsigned long *)input;
		data.requested_ui_label = power_state_convert(ps);
		ret = pem_handle_event(pp_handle->eventmgr, event_id, &data);
		break;
	}
	default:
		break;
	}
	return ret;
}
enum amd_pm_state_type pp_dpm_get_current_power_state(void *handle)
{
	struct pp_hwmgr *hwmgr;
	struct pp_power_state *state;

	if (handle == NULL)
		return -EINVAL;

	hwmgr = ((struct pp_instance *)handle)->hwmgr;

	if (hwmgr == NULL || hwmgr->current_ps == NULL)
		return -EINVAL;

	state = hwmgr->current_ps;

	switch (state->classification.ui_label) {
	case PP_StateUILabel_Battery:
		return POWER_STATE_TYPE_BATTERY;
	case PP_StateUILabel_Balanced:
		return POWER_STATE_TYPE_BALANCED;
	case PP_StateUILabel_Performance:
		return POWER_STATE_TYPE_PERFORMANCE;
	default:
		return POWER_STATE_TYPE_DEFAULT;
	}
}
static void
pp_debugfs_print_current_performance_level(void *handle,
					   struct seq_file *m)
{
	struct pp_hwmgr *hwmgr;

	if (handle == NULL)
		return;

	hwmgr = ((struct pp_instance *)handle)->hwmgr;

	if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
	    hwmgr->hwmgr_func->print_current_perforce_level == NULL)
		return;

	hwmgr->hwmgr_func->print_current_perforce_level(hwmgr, m);
}

static int pp_dpm_set_fan_control_mode(void *handle, uint32_t mode)
{
	struct pp_hwmgr *hwmgr;

	if (handle == NULL)
		return -EINVAL;

	hwmgr = ((struct pp_instance *)handle)->hwmgr;

	if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
	    hwmgr->hwmgr_func->set_fan_control_mode == NULL)
		return -EINVAL;

	return hwmgr->hwmgr_func->set_fan_control_mode(hwmgr, mode);
}

static int pp_dpm_get_fan_control_mode(void *handle)
{
	struct pp_hwmgr *hwmgr;

	if (handle == NULL)
		return -EINVAL;

	hwmgr = ((struct pp_instance *)handle)->hwmgr;

	if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
	    hwmgr->hwmgr_func->get_fan_control_mode == NULL)
		return -EINVAL;

	return hwmgr->hwmgr_func->get_fan_control_mode(hwmgr);
}

static int pp_dpm_set_fan_speed_percent(void *handle, uint32_t percent)
{
	struct pp_hwmgr *hwmgr;

	if (handle == NULL)
		return -EINVAL;

	hwmgr = ((struct pp_instance *)handle)->hwmgr;

	if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
	    hwmgr->hwmgr_func->set_fan_speed_percent == NULL)
		return -EINVAL;

	return hwmgr->hwmgr_func->set_fan_speed_percent(hwmgr, percent);
}

static int pp_dpm_get_fan_speed_percent(void *handle, uint32_t *speed)
{
	struct pp_hwmgr *hwmgr;

	if (handle == NULL)
		return -EINVAL;

	hwmgr = ((struct pp_instance *)handle)->hwmgr;

	if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
	    hwmgr->hwmgr_func->get_fan_speed_percent == NULL)
		return -EINVAL;

	return hwmgr->hwmgr_func->get_fan_speed_percent(hwmgr, speed);
}

static int pp_dpm_get_temperature(void *handle)
{
	struct pp_hwmgr *hwmgr;

	if (handle == NULL)
		return -EINVAL;

	hwmgr = ((struct pp_instance *)handle)->hwmgr;

	if (hwmgr == NULL || hwmgr->hwmgr_func == NULL ||
	    hwmgr->hwmgr_func->get_temperature == NULL)
		return -EINVAL;

	return hwmgr->hwmgr_func->get_temperature(hwmgr);
}
const struct amd_powerplay_funcs pp_dpm_funcs = {
	.get_temperature = pp_dpm_get_temperature,
	.load_firmware = pp_dpm_load_fw,
	.wait_for_fw_loading_complete = pp_dpm_fw_loading_complete,
	.force_performance_level = pp_dpm_force_performance_level,
	.get_performance_level = pp_dpm_get_performance_level,
	.get_current_power_state = pp_dpm_get_current_power_state,
	.get_sclk = pp_dpm_get_sclk,
	.get_mclk = pp_dpm_get_mclk,
	.powergate_vce = pp_dpm_powergate_vce,
	.powergate_uvd = pp_dpm_powergate_uvd,
	.dispatch_tasks = pp_dpm_dispatch_tasks,
	.print_current_performance_level = pp_debugfs_print_current_performance_level,
	.set_fan_control_mode = pp_dpm_set_fan_control_mode,
	.get_fan_control_mode = pp_dpm_get_fan_control_mode,
	.set_fan_speed_percent = pp_dpm_set_fan_speed_percent,
	.get_fan_speed_percent = pp_dpm_get_fan_speed_percent,
};
static int amd_pp_instance_init(struct amd_pp_init *pp_init,
				struct amd_powerplay *amd_pp)
{
	int ret;
	struct pp_instance *handle;

	handle = kzalloc(sizeof(struct pp_instance), GFP_KERNEL);
	if (handle == NULL)
		return -ENOMEM;

	handle->pp_valid = PP_VALID;

	ret = smum_init(pp_init, handle);
	if (ret)
		goto fail_smum;

	ret = hwmgr_init(pp_init, handle);
	if (ret)
		goto fail_hwmgr;

	ret = eventmgr_init(handle);
	if (ret)
		goto fail_eventmgr;

	amd_pp->pp_handle = handle;
	return 0;

fail_eventmgr:
	hwmgr_fini(handle->hwmgr);
fail_hwmgr:
	smum_fini(handle->smu_mgr);
fail_smum:
	kfree(handle);
	return ret;
}

static int amd_pp_instance_fini(void *handle)
{
	struct pp_instance *instance = (struct pp_instance *)handle;

	if (instance == NULL)
		return -EINVAL;

	eventmgr_fini(instance->eventmgr);
	hwmgr_fini(instance->hwmgr);
	smum_fini(instance->smu_mgr);
	kfree(handle);
	return 0;
}
int amd_powerplay_init(struct amd_pp_init *pp_init,
		       struct amd_powerplay *amd_pp)
{
	int ret;

	if (pp_init == NULL || amd_pp == NULL)
		return -EINVAL;

	ret = amd_pp_instance_init(pp_init, amd_pp);
	if (ret)
		return ret;

	amd_pp->ip_funcs = &pp_ip_funcs;
	amd_pp->pp_funcs = &pp_dpm_funcs;

	return 0;
}

int amd_powerplay_fini(void *handle)
{
	amd_pp_instance_fini(handle);
	return 0;
}

/* export this function to DAL */
int amd_powerplay_display_configuration_change(void *handle, const void *input)
{
	struct pp_hwmgr *hwmgr;
	const struct amd_pp_display_configuration *display_config = input;

	PP_CHECK((struct pp_instance *)handle);

	hwmgr = ((struct pp_instance *)handle)->hwmgr;
	phm_store_dal_configuration_data(hwmgr, display_config);
	return 0;
}

int amd_powerplay_get_display_power_level(void *handle,
					  struct amd_pp_dal_clock_info *output)
{
	struct pp_hwmgr *hwmgr;

	PP_CHECK((struct pp_instance *)handle);

	if (output == NULL)
		return -EINVAL;

	hwmgr = ((struct pp_instance *)handle)->hwmgr;
	return phm_get_dal_power_level(hwmgr, output);
}


@@ -0,0 +1,11 @@
#
# Makefile for the 'event manager' sub-component of powerplay.
# It provides the event management services for the driver.
EVENT_MGR = eventmgr.o eventinit.o eventmanagement.o \
eventactionchains.o eventsubchains.o eventtasks.o psm.o
AMD_PP_EVENT = $(addprefix $(AMD_PP_PATH)/eventmgr/,$(EVENT_MGR))
AMD_POWERPLAY_FILES += $(AMD_PP_EVENT)


@@ -0,0 +1,289 @@
/*
* Copyright 2015 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#include "eventmgr.h"
#include "eventactionchains.h"
#include "eventsubchains.h"
static const pem_event_action *initialize_event[] = {
block_adjust_power_state_tasks,
power_budget_tasks,
system_config_tasks,
setup_asic_tasks,
enable_dynamic_state_management_tasks,
enable_clock_power_gatings_tasks,
get_2d_performance_state_tasks,
set_performance_state_tasks,
initialize_thermal_controller_tasks,
conditionally_force_3d_performance_state_tasks,
process_vbios_eventinfo_tasks,
broadcast_power_policy_tasks,
NULL
};
const struct action_chain initialize_action_chain = {
"Initialize",
initialize_event
};
static const pem_event_action *uninitialize_event[] = {
ungate_all_display_phys_tasks,
uninitialize_display_phy_access_tasks,
disable_gfx_voltage_island_power_gating_tasks,
disable_gfx_clock_gating_tasks,
set_boot_state_tasks,
adjust_power_state_tasks,
disable_dynamic_state_management_tasks,
disable_clock_power_gatings_tasks,
cleanup_asic_tasks,
prepare_for_pnp_stop_tasks,
NULL
};
const struct action_chain uninitialize_action_chain = {
"Uninitialize",
uninitialize_event
};
static const pem_event_action *power_source_change_event_pp_enabled[] = {
set_power_source_tasks,
set_power_saving_state_tasks,
adjust_power_state_tasks,
enable_disable_fps_tasks,
set_nbmcu_state_tasks,
broadcast_power_policy_tasks,
NULL
};
const struct action_chain power_source_change_action_chain_pp_enabled = {
"Power source change - PowerPlay enabled",
power_source_change_event_pp_enabled
};
static const pem_event_action *power_source_change_event_pp_disabled[] = {
set_power_source_tasks,
set_nbmcu_state_tasks,
NULL
};
const struct action_chain power_source_changes_action_chain_pp_disabled = {
"Power source change - PowerPlay disabled",
power_source_change_event_pp_disabled
};
static const pem_event_action *power_source_change_event_hardware_dc[] = {
set_power_source_tasks,
set_power_saving_state_tasks,
adjust_power_state_tasks,
enable_disable_fps_tasks,
reset_hardware_dc_notification_tasks,
set_nbmcu_state_tasks,
broadcast_power_policy_tasks,
NULL
};
const struct action_chain power_source_change_action_chain_hardware_dc = {
"Power source change - with Hardware DC switching",
power_source_change_event_hardware_dc
};
static const pem_event_action *suspend_event[] = {
reset_display_phy_access_tasks,
unregister_interrupt_tasks,
disable_gfx_voltage_island_power_gating_tasks,
disable_gfx_clock_gating_tasks,
notify_smu_suspend_tasks,
disable_smc_firmware_ctf_tasks,
set_boot_state_tasks,
adjust_power_state_tasks,
disable_fps_tasks,
vari_bright_suspend_tasks,
reset_fan_speed_to_default_tasks,
power_down_asic_tasks,
disable_stutter_mode_tasks,
set_connected_standby_tasks,
block_hw_access_tasks,
NULL
};
const struct action_chain suspend_action_chain = {
"Suspend",
suspend_event
};
static const pem_event_action *resume_event[] = {
unblock_hw_access_tasks,
resume_connected_standby_tasks,
notify_smu_resume_tasks,
reset_display_configCounter_tasks,
update_dal_configuration_tasks,
vari_bright_resume_tasks,
block_adjust_power_state_tasks,
setup_asic_tasks,
enable_stutter_mode_tasks, /*must do this in boot state and before SMC is started */
enable_dynamic_state_management_tasks,
enable_clock_power_gatings_tasks,
enable_disable_bapm_tasks,
initialize_thermal_controller_tasks,
reset_boot_state_tasks,
adjust_power_state_tasks,
enable_disable_fps_tasks,
notify_hw_power_source_tasks,
process_vbios_event_info_tasks,
enable_gfx_clock_gating_tasks,
enable_gfx_voltage_island_power_gating_tasks,
reset_clock_gating_tasks,
notify_smu_vpu_recovery_end_tasks,
disable_vpu_cap_tasks,
execute_escape_sequence_tasks,
NULL
};
const struct action_chain resume_action_chain = {
"resume",
resume_event
};
static const pem_event_action *complete_init_event[] = {
adjust_power_state_tasks,
enable_gfx_clock_gating_tasks,
enable_gfx_voltage_island_power_gating_tasks,
notify_power_state_change_tasks,
NULL
};
const struct action_chain complete_init_action_chain = {
"complete init",
complete_init_event
};
static const pem_event_action *enable_gfx_clock_gating_event[] = {
enable_gfx_clock_gating_tasks,
NULL
};
const struct action_chain enable_gfx_clock_gating_action_chain = {
"enable gfx clock gate",
enable_gfx_clock_gating_event
};
static const pem_event_action *disable_gfx_clock_gating_event[] = {
disable_gfx_clock_gating_tasks,
NULL
};
const struct action_chain disable_gfx_clock_gating_action_chain = {
"disable gfx clock gate",
disable_gfx_clock_gating_event
};
static const pem_event_action *enable_cgpg_event[] = {
enable_cgpg_tasks,
NULL
};
const struct action_chain enable_cgpg_action_chain = {
"enable cg pg",
enable_cgpg_event
};
static const pem_event_action *disable_cgpg_event[] = {
disable_cgpg_tasks,
NULL
};
const struct action_chain disable_cgpg_action_chain = {
"disable cg pg",
disable_cgpg_event
};
/* Enable user _2d performance and activate */
static const pem_event_action *enable_user_state_event[] = {
create_new_user_performance_state_tasks,
adjust_power_state_tasks,
NULL
};
const struct action_chain enable_user_state_action_chain = {
"Enable user state",
enable_user_state_event
};
static const pem_event_action *enable_user_2d_performance_event[] = {
enable_user_2d_performance_tasks,
add_user_2d_performance_state_tasks,
set_performance_state_tasks,
adjust_power_state_tasks,
delete_user_2d_performance_state_tasks,
NULL
};
const struct action_chain enable_user_2d_performance_action_chain = {
"enable_user_2d_performance_event_activate",
enable_user_2d_performance_event
};
static const pem_event_action *disable_user_2d_performance_event[] = {
disable_user_2d_performance_tasks,
delete_user_2d_performance_state_tasks,
NULL
};
const struct action_chain disable_user_2d_performance_action_chain = {
"disable_user_2d_performance_event",
disable_user_2d_performance_event
};
static const pem_event_action *display_config_change_event[] = {
/* countDisplayConfigurationChangeEventTasks, */
unblock_adjust_power_state_tasks,
set_cpu_power_state,
notify_hw_power_source_tasks,
/* updateDALConfigurationTasks,
variBrightDisplayConfigurationChangeTasks, */
adjust_power_state_tasks,
/*enableDisableFPSTasks,
setNBMCUStateTasks,
notifyPCIEDeviceReadyTasks,*/
NULL
};
const struct action_chain display_config_change_action_chain = {
"Display configuration change",
display_config_change_event
};
static const pem_event_action *readjust_power_state_event[] = {
adjust_power_state_tasks,
NULL
};
const struct action_chain readjust_power_state_action_chain = {
"re-adjust power state",
readjust_power_state_event
};


@@ -0,0 +1,62 @@
/*
* Copyright 2015 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef _EVENT_ACTION_CHAINS_H_
#define _EVENT_ACTION_CHAINS_H_
#include "eventmgr.h"
extern const struct action_chain initialize_action_chain;
extern const struct action_chain uninitialize_action_chain;
extern const struct action_chain power_source_change_action_chain_pp_enabled;
extern const struct action_chain power_source_changes_action_chain_pp_disabled;
extern const struct action_chain power_source_change_action_chain_hardware_dc;
extern const struct action_chain suspend_action_chain;
extern const struct action_chain resume_action_chain;
extern const struct action_chain complete_init_action_chain;
extern const struct action_chain enable_gfx_clock_gating_action_chain;
extern const struct action_chain disable_gfx_clock_gating_action_chain;
extern const struct action_chain enable_cgpg_action_chain;
extern const struct action_chain disable_cgpg_action_chain;
extern const struct action_chain enable_user_2d_performance_action_chain;
extern const struct action_chain disable_user_2d_performance_action_chain;
extern const struct action_chain enable_user_state_action_chain;
extern const struct action_chain readjust_power_state_action_chain;
extern const struct action_chain display_config_change_action_chain;
#endif /*_EVENT_ACTION_CHAINS_H_*/


@@ -0,0 +1,195 @@
/*
* Copyright 2015 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#include "eventmgr.h"
#include "eventinit.h"
#include "ppinterrupt.h"
#include "hardwaremanager.h"
void pem_init_feature_info(struct pp_eventmgr *eventmgr)
{
/* PowerPlay info */
eventmgr->ui_state_info[PP_PowerSource_AC].default_ui_lable =
PP_StateUILabel_Performance;
eventmgr->ui_state_info[PP_PowerSource_AC].current_ui_label =
PP_StateUILabel_Performance;
eventmgr->ui_state_info[PP_PowerSource_DC].default_ui_lable =
PP_StateUILabel_Battery;
eventmgr->ui_state_info[PP_PowerSource_DC].current_ui_label =
PP_StateUILabel_Battery;
if (phm_cap_enabled(eventmgr->platform_descriptor->platformCaps, PHM_PlatformCaps_PowerPlaySupport)) {
eventmgr->features[PP_Feature_PowerPlay].supported = true;
eventmgr->features[PP_Feature_PowerPlay].version = PEM_CURRENT_POWERPLAY_FEATURE_VERSION;
eventmgr->features[PP_Feature_PowerPlay].enabled_default = true;
eventmgr->features[PP_Feature_PowerPlay].enabled = true;
} else {
eventmgr->features[PP_Feature_PowerPlay].supported = false;
eventmgr->features[PP_Feature_PowerPlay].enabled = false;
eventmgr->features[PP_Feature_PowerPlay].enabled_default = false;
}
eventmgr->features[PP_Feature_Force3DClock].supported = true;
eventmgr->features[PP_Feature_Force3DClock].enabled = false;
eventmgr->features[PP_Feature_Force3DClock].enabled_default = false;
eventmgr->features[PP_Feature_Force3DClock].version = 1;
/* overdrive */
eventmgr->features[PP_Feature_User2DPerformance].version = 4;
eventmgr->features[PP_Feature_User3DPerformance].version = 4;
eventmgr->features[PP_Feature_OverdriveTest].version = 4;
eventmgr->features[PP_Feature_OverDrive].version = 4;
eventmgr->features[PP_Feature_OverDrive].enabled = false;
eventmgr->features[PP_Feature_OverDrive].enabled_default = false;
eventmgr->features[PP_Feature_User2DPerformance].supported = false;
eventmgr->features[PP_Feature_User2DPerformance].enabled = false;
eventmgr->features[PP_Feature_User2DPerformance].enabled_default = false;
eventmgr->features[PP_Feature_User3DPerformance].supported = false;
eventmgr->features[PP_Feature_User3DPerformance].enabled = false;
eventmgr->features[PP_Feature_User3DPerformance].enabled_default = false;
eventmgr->features[PP_Feature_OverdriveTest].supported = false;
eventmgr->features[PP_Feature_OverdriveTest].enabled = false;
eventmgr->features[PP_Feature_OverdriveTest].enabled_default = false;
eventmgr->features[PP_Feature_OverDrive].supported = false;
eventmgr->features[PP_Feature_PowerBudgetWaiver].enabled_default = false;
eventmgr->features[PP_Feature_PowerBudgetWaiver].version = 1;
eventmgr->features[PP_Feature_PowerBudgetWaiver].supported = false;
eventmgr->features[PP_Feature_PowerBudgetWaiver].enabled = false;
/* Multi UVD States support */
eventmgr->features[PP_Feature_MultiUVDState].supported = false;
eventmgr->features[PP_Feature_MultiUVDState].enabled = false;
eventmgr->features[PP_Feature_MultiUVDState].enabled_default = false;
/* Dynamic UVD States support */
eventmgr->features[PP_Feature_DynamicUVDState].supported = false;
eventmgr->features[PP_Feature_DynamicUVDState].enabled = false;
eventmgr->features[PP_Feature_DynamicUVDState].enabled_default = false;
/* VCE DPM support */
eventmgr->features[PP_Feature_VCEDPM].supported = false;
eventmgr->features[PP_Feature_VCEDPM].enabled = false;
eventmgr->features[PP_Feature_VCEDPM].enabled_default = false;
/* ACP PowerGating support */
eventmgr->features[PP_Feature_ACP_POWERGATING].supported = false;
eventmgr->features[PP_Feature_ACP_POWERGATING].enabled = false;
eventmgr->features[PP_Feature_ACP_POWERGATING].enabled_default = false;
/* PPM support */
eventmgr->features[PP_Feature_PPM].version = 1;
eventmgr->features[PP_Feature_PPM].supported = false;
eventmgr->features[PP_Feature_PPM].enabled = false;
/* FFC support (enables fan and temp settings, Gemini needs temp settings) */
if (phm_cap_enabled(eventmgr->platform_descriptor->platformCaps, PHM_PlatformCaps_ODFuzzyFanControlSupport) ||
phm_cap_enabled(eventmgr->platform_descriptor->platformCaps, PHM_PlatformCaps_GeminiRegulatorFanControlSupport)) {
eventmgr->features[PP_Feature_FFC].version = 1;
eventmgr->features[PP_Feature_FFC].supported = true;
eventmgr->features[PP_Feature_FFC].enabled = true;
eventmgr->features[PP_Feature_FFC].enabled_default = true;
} else {
eventmgr->features[PP_Feature_FFC].supported = false;
eventmgr->features[PP_Feature_FFC].enabled = false;
eventmgr->features[PP_Feature_FFC].enabled_default = false;
}
eventmgr->features[PP_Feature_VariBright].supported = false;
eventmgr->features[PP_Feature_VariBright].enabled = false;
eventmgr->features[PP_Feature_VariBright].enabled_default = false;
eventmgr->features[PP_Feature_BACO].supported = false;
eventmgr->features[PP_Feature_BACO].enabled = false;
eventmgr->features[PP_Feature_BACO].enabled_default = false;
/* PowerDown feature support */
eventmgr->features[PP_Feature_PowerDown].supported = false;
eventmgr->features[PP_Feature_PowerDown].enabled = false;
eventmgr->features[PP_Feature_PowerDown].enabled_default = false;
eventmgr->features[PP_Feature_FPS].version = 1;
eventmgr->features[PP_Feature_FPS].supported = false;
eventmgr->features[PP_Feature_FPS].enabled_default = false;
eventmgr->features[PP_Feature_FPS].enabled = false;
eventmgr->features[PP_Feature_ViPG].version = 1;
eventmgr->features[PP_Feature_ViPG].supported = false;
eventmgr->features[PP_Feature_ViPG].enabled_default = false;
eventmgr->features[PP_Feature_ViPG].enabled = false;
}
static int thermal_interrupt_callback(void *private_data,
				      unsigned src_id, const uint32_t *iv_entry)
{
	/* TODO: handle PEM_Event_ThermalNotification for
	 * (struct pp_eventmgr *)private_data */
	pr_info("current thermal is out of range\n");
	return 0;
}
int pem_register_interrupts(struct pp_eventmgr *eventmgr)
{
	int result = 0;
	struct pp_interrupt_registration_info info;

	info.call_back = thermal_interrupt_callback;
	info.context = eventmgr;

	result = phm_register_thermal_interrupt(eventmgr->hwmgr, &info);

	/* TODO (item 1, the thermal interrupt, is registered above):
	 * 2. Register CTF event interrupt
	 * 3. Register for vbios events interrupt
	 * 4. Register External Throttle Interrupt
	 * 5. Register Smc To Host Interrupt
	 */
	return result;
}
int pem_unregister_interrupts(struct pp_eventmgr *eventmgr)
{
return 0;
}
void pem_uninit_featureInfo(struct pp_eventmgr *eventmgr)
{
eventmgr->features[PP_Feature_MultiUVDState].supported = false;
eventmgr->features[PP_Feature_VariBright].supported = false;
eventmgr->features[PP_Feature_PowerBudgetWaiver].supported = false;
eventmgr->features[PP_Feature_OverDrive].supported = false;
eventmgr->features[PP_Feature_OverdriveTest].supported = false;
eventmgr->features[PP_Feature_User3DPerformance].supported = false;
eventmgr->features[PP_Feature_User2DPerformance].supported = false;
eventmgr->features[PP_Feature_PowerPlay].supported = false;
eventmgr->features[PP_Feature_Force3DClock].supported = false;
}


@@ -0,0 +1,34 @@
/*
* Copyright 2015 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef _EVENTINIT_H_
#define _EVENTINIT_H_
#define PEM_CURRENT_POWERPLAY_FEATURE_VERSION 4
void pem_init_feature_info(struct pp_eventmgr *eventmgr);
void pem_uninit_featureInfo(struct pp_eventmgr *eventmgr);
int pem_register_interrupts(struct pp_eventmgr *eventmgr);
int pem_unregister_interrupts(struct pp_eventmgr *eventmgr);
#endif /* _EVENTINIT_H_ */


@@ -0,0 +1,215 @@
/*
* Copyright 2015 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#include "eventmanagement.h"
#include "eventmgr.h"
#include "eventactionchains.h"
int pem_init_event_action_chains(struct pp_eventmgr *eventmgr)
{
	int i;

	for (i = 0; i < AMD_PP_EVENT_MAX; i++)
		eventmgr->event_chain[i] = NULL;

	eventmgr->event_chain[AMD_PP_EVENT_SUSPEND] = pem_get_suspend_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_INITIALIZE] = pem_get_initialize_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_UNINITIALIZE] = pem_get_uninitialize_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_POWER_SOURCE_CHANGE] = pem_get_power_source_change_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_HIBERNATE] = pem_get_hibernate_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_RESUME] = pem_get_resume_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_THERMAL_NOTIFICATION] = pem_get_thermal_notification_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_VBIOS_NOTIFICATION] = pem_get_vbios_notification_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_ENTER_THERMAL_STATE] = pem_get_enter_thermal_state_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_EXIT_THERMAL_STATE] = pem_get_exit_thermal_state_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_ENABLE_POWER_PLAY] = pem_get_enable_powerplay_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_DISABLE_POWER_PLAY] = pem_get_disable_powerplay_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_ENABLE_OVER_DRIVE_TEST] = pem_get_enable_overdrive_test_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_DISABLE_OVER_DRIVE_TEST] = pem_get_disable_overdrive_test_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_ENABLE_GFX_CLOCK_GATING] = pem_get_enable_gfx_clock_gating_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_DISABLE_GFX_CLOCK_GATING] = pem_get_disable_gfx_clock_gating_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_ENABLE_CGPG] = pem_get_enable_cgpg_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_DISABLE_CGPG] = pem_get_disable_cgpg_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_COMPLETE_INIT] = pem_get_complete_init_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_SCREEN_ON] = pem_get_screen_on_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_SCREEN_OFF] = pem_get_screen_off_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_PRE_SUSPEND] = pem_get_pre_suspend_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_PRE_RESUME] = pem_get_pre_resume_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_ENABLE_USER_STATE] = pem_enable_user_state_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_READJUST_POWER_STATE] = pem_readjust_power_state_action_chain(eventmgr);
	eventmgr->event_chain[AMD_PP_EVENT_DISPLAY_CONFIG_CHANGE] = pem_display_config_change_action_chain(eventmgr);
	return 0;
}
int pem_excute_event_chain(struct pp_eventmgr *eventmgr,
			   const struct action_chain *event_chain,
			   struct pem_event_data *event_data)
{
	const pem_event_action **paction_chain;
	const pem_event_action *psub_chain;
	int tmp_result = 0;
	int result = 0;

	if (eventmgr == NULL || event_chain == NULL || event_data == NULL)
		return -EINVAL;

	for (paction_chain = event_chain->action_chain; NULL != *paction_chain;
	     paction_chain++) {
		if (0 != result)
			return result;

		for (psub_chain = *paction_chain; NULL != *psub_chain;
		     psub_chain++) {
			tmp_result = (*psub_chain)(eventmgr, event_data);
			if (0 == result)
				result = tmp_result;
		}
	}
	return result;
}
const struct action_chain *pem_get_suspend_action_chain(struct pp_eventmgr *eventmgr)
{
        return &suspend_action_chain;
}

const struct action_chain *pem_get_initialize_action_chain(struct pp_eventmgr *eventmgr)
{
        return &initialize_action_chain;
}

const struct action_chain *pem_get_uninitialize_action_chain(struct pp_eventmgr *eventmgr)
{
        return &uninitialize_action_chain;
}

const struct action_chain *pem_get_power_source_change_action_chain(struct pp_eventmgr *eventmgr)
{
        return &power_source_change_action_chain_pp_enabled; /* other cases based on feature info */
}

const struct action_chain *pem_get_resume_action_chain(struct pp_eventmgr *eventmgr)
{
        return &resume_action_chain;
}

const struct action_chain *pem_get_hibernate_action_chain(struct pp_eventmgr *eventmgr)
{
        return NULL;
}

const struct action_chain *pem_get_thermal_notification_action_chain(struct pp_eventmgr *eventmgr)
{
        return NULL;
}

const struct action_chain *pem_get_vbios_notification_action_chain(struct pp_eventmgr *eventmgr)
{
        return NULL;
}

const struct action_chain *pem_get_enter_thermal_state_action_chain(struct pp_eventmgr *eventmgr)
{
        return NULL;
}

const struct action_chain *pem_get_exit_thermal_state_action_chain(struct pp_eventmgr *eventmgr)
{
        return NULL;
}

const struct action_chain *pem_get_enable_powerplay_action_chain(struct pp_eventmgr *eventmgr)
{
        return NULL;
}

const struct action_chain *pem_get_disable_powerplay_action_chain(struct pp_eventmgr *eventmgr)
{
        return NULL;
}

const struct action_chain *pem_get_enable_overdrive_test_action_chain(struct pp_eventmgr *eventmgr)
{
        return NULL;
}

const struct action_chain *pem_get_disable_overdrive_test_action_chain(struct pp_eventmgr *eventmgr)
{
        return NULL;
}

const struct action_chain *pem_get_enable_gfx_clock_gating_action_chain(struct pp_eventmgr *eventmgr)
{
        return &enable_gfx_clock_gating_action_chain;
}

const struct action_chain *pem_get_disable_gfx_clock_gating_action_chain(struct pp_eventmgr *eventmgr)
{
        return &disable_gfx_clock_gating_action_chain;
}

const struct action_chain *pem_get_enable_cgpg_action_chain(struct pp_eventmgr *eventmgr)
{
        return &enable_cgpg_action_chain;
}

const struct action_chain *pem_get_disable_cgpg_action_chain(struct pp_eventmgr *eventmgr)
{
        return &disable_cgpg_action_chain;
}

const struct action_chain *pem_get_complete_init_action_chain(struct pp_eventmgr *eventmgr)
{
        return &complete_init_action_chain;
}

const struct action_chain *pem_get_screen_on_action_chain(struct pp_eventmgr *eventmgr)
{
        return NULL;
}

const struct action_chain *pem_get_screen_off_action_chain(struct pp_eventmgr *eventmgr)
{
        return NULL;
}

const struct action_chain *pem_get_pre_suspend_action_chain(struct pp_eventmgr *eventmgr)
{
        return NULL;
}

const struct action_chain *pem_get_pre_resume_action_chain(struct pp_eventmgr *eventmgr)
{
        return NULL;
}

const struct action_chain *pem_enable_user_state_action_chain(struct pp_eventmgr *eventmgr)
{
        return &enable_user_state_action_chain;
}

const struct action_chain *pem_readjust_power_state_action_chain(struct pp_eventmgr *eventmgr)
{
        return &readjust_power_state_action_chain;
}

const struct action_chain *pem_display_config_change_action_chain(struct pp_eventmgr *eventmgr)
{
        return &display_config_change_action_chain;
}


@ -0,0 +1,59 @@
/*
* Copyright 2015 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef _EVENT_MANAGEMENT_H_
#define _EVENT_MANAGEMENT_H_
#include "eventmgr.h"
int pem_init_event_action_chains(struct pp_eventmgr *eventmgr);
int pem_excute_event_chain(struct pp_eventmgr *eventmgr, const struct action_chain *event_chain, struct pem_event_data *event_data);
const struct action_chain *pem_get_suspend_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_initialize_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_uninitialize_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_power_source_change_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_resume_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_hibernate_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_thermal_notification_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_vbios_notification_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_enter_thermal_state_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_exit_thermal_state_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_enable_powerplay_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_disable_powerplay_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_enable_overdrive_test_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_disable_overdrive_test_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_enable_gfx_clock_gating_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_disable_gfx_clock_gating_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_enable_cgpg_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_disable_cgpg_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_complete_init_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_screen_on_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_screen_off_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_pre_suspend_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_get_pre_resume_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_enable_user_state_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_readjust_power_state_action_chain(struct pp_eventmgr *eventmgr);
const struct action_chain *pem_display_config_change_action_chain(struct pp_eventmgr *eventmgr);
#endif /* _EVENT_MANAGEMENT_H_ */


@ -0,0 +1,114 @@
/*
* Copyright 2015 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include "eventmgr.h"
#include "hwmgr.h"
#include "eventinit.h"
#include "eventmanagement.h"
static int pem_init(struct pp_eventmgr *eventmgr)
{
        int result = 0;
        struct pem_event_data event_data;

        /* Initialize PowerPlay feature info */
        pem_init_feature_info(eventmgr);

        /* Initialize event action chains */
        pem_init_event_action_chains(eventmgr);

        /* Call initialization event */
        result = pem_handle_event(eventmgr, AMD_PP_EVENT_INITIALIZE, &event_data);
        if (0 != result)
                return result;

        /* Register interrupt callback functions */
        result = pem_register_interrupts(eventmgr);

        return result;
}

static void pem_fini(struct pp_eventmgr *eventmgr)
{
        struct pem_event_data event_data;

        pem_uninit_featureInfo(eventmgr);
        pem_unregister_interrupts(eventmgr);

        pem_handle_event(eventmgr, AMD_PP_EVENT_UNINITIALIZE, &event_data);

        kfree(eventmgr);        /* kfree() tolerates NULL, no check needed */
}
int eventmgr_init(struct pp_instance *handle)
{
        int result = 0;
        struct pp_eventmgr *eventmgr;

        if (handle == NULL)
                return -EINVAL;

        eventmgr = kzalloc(sizeof(struct pp_eventmgr), GFP_KERNEL);
        if (eventmgr == NULL)
                return -ENOMEM;

        eventmgr->hwmgr = handle->hwmgr;
        handle->eventmgr = eventmgr;
        eventmgr->platform_descriptor = &(eventmgr->hwmgr->platform_descriptor);
        eventmgr->pp_eventmgr_init = pem_init;
        eventmgr->pp_eventmgr_fini = pem_fini;

        return result;
}

int eventmgr_fini(struct pp_eventmgr *eventmgr)
{
        kfree(eventmgr);
        return 0;
}

static int pem_handle_event_unlocked(struct pp_eventmgr *eventmgr, enum amd_pp_event event, struct pem_event_data *data)
{
        if (eventmgr == NULL || event >= AMD_PP_EVENT_MAX || data == NULL)
                return -EINVAL;

        return pem_excute_event_chain(eventmgr, eventmgr->event_chain[event], data);
}

int pem_handle_event(struct pp_eventmgr *eventmgr, enum amd_pp_event event, struct pem_event_data *event_data)
{
        return pem_handle_event_unlocked(eventmgr, event, event_data);
}

bool pem_is_hw_access_blocked(struct pp_eventmgr *eventmgr)
{
        return (eventmgr->block_adjust_power_state || phm_is_hw_access_blocked(eventmgr->hwmgr));
}


@ -0,0 +1,410 @@
/*
* Copyright 2015 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#include "eventmgr.h"
#include "eventsubchains.h"
#include "eventtasks.h"
#include "hardwaremanager.h"
const pem_event_action reset_display_phy_access_tasks[] = {
pem_task_reset_display_phys_access,
NULL
};
const pem_event_action broadcast_power_policy_tasks[] = {
/* PEM_Task_BroadcastPowerPolicyChange, */
NULL
};
const pem_event_action unregister_interrupt_tasks[] = {
pem_task_unregister_interrupts,
NULL
};
/* Disable GFX Voltage Islands Power Gating */
const pem_event_action disable_gfx_voltage_island_powergating_tasks[] = {
pem_task_disable_voltage_island_power_gating,
NULL
};
const pem_event_action disable_gfx_clockgating_tasks[] = {
pem_task_disable_gfx_clock_gating,
NULL
};
const pem_event_action block_adjust_power_state_tasks[] = {
pem_task_block_adjust_power_state,
NULL
};
const pem_event_action unblock_adjust_power_state_tasks[] = {
pem_task_unblock_adjust_power_state,
NULL
};
const pem_event_action set_performance_state_tasks[] = {
pem_task_set_performance_state,
NULL
};
const pem_event_action get_2d_performance_state_tasks[] = {
pem_task_get_2D_performance_state_id,
NULL
};
const pem_event_action conditionally_force3D_performance_state_tasks[] = {
pem_task_conditionally_force_3d_performance_state,
NULL
};
const pem_event_action process_vbios_eventinfo_tasks[] = {
/* PEM_Task_ProcessVbiosEventInfo,*/
NULL
};
const pem_event_action enable_dynamic_state_management_tasks[] = {
/* PEM_Task_ResetBAPMPolicyChangedFlag,*/
pem_task_get_boot_state_id,
pem_task_enable_dynamic_state_management,
pem_task_register_interrupts,
NULL
};
const pem_event_action enable_clock_power_gatings_tasks[] = {
pem_task_enable_clock_power_gatings_tasks,
pem_task_powerdown_uvd_tasks,
pem_task_powerdown_vce_tasks,
NULL
};
const pem_event_action setup_asic_tasks[] = {
pem_task_setup_asic,
NULL
};
const pem_event_action power_budget_tasks[] = {
/* TODO
* PEM_Task_PowerBudgetWaiverAvailable,
* PEM_Task_PowerBudgetWarningMessage,
* PEM_Task_PruneStatesBasedOnPowerBudget,
*/
NULL
};
const pem_event_action system_config_tasks[] = {
/* PEM_Task_PruneStatesBasedOnSystemConfig,*/
NULL
};
const pem_event_action conditionally_force_3d_performance_state_tasks[] = {
pem_task_conditionally_force_3d_performance_state,
NULL
};
const pem_event_action ungate_all_display_phys_tasks[] = {
/* PEM_Task_GetDisplayPhyAccessInfo */
NULL
};
const pem_event_action uninitialize_display_phy_access_tasks[] = {
/* PEM_Task_UninitializeDisplayPhysAccess, */
NULL
};
const pem_event_action disable_gfx_voltage_island_power_gating_tasks[] = {
/* PEM_Task_DisableVoltageIslandPowerGating, */
NULL
};
const pem_event_action disable_gfx_clock_gating_tasks[] = {
pem_task_disable_gfx_clock_gating,
NULL
};
const pem_event_action set_boot_state_tasks[] = {
pem_task_get_boot_state_id,
pem_task_set_boot_state,
NULL
};
const pem_event_action adjust_power_state_tasks[] = {
        pem_task_notify_hw_mgr_display_configuration_change,
        pem_task_adjust_power_state,
        pem_task_notify_smc_display_config_after_power_state_adjustment,
        pem_task_update_allowed_performance_levels,
        /* TODO: pem_task_Enable_disable_bapm, */
        NULL
};
const pem_event_action disable_dynamic_state_management_tasks[] = {
pem_task_unregister_interrupts,
pem_task_get_boot_state_id,
pem_task_disable_dynamic_state_management,
NULL
};
const pem_event_action disable_clock_power_gatings_tasks[] = {
pem_task_disable_clock_power_gatings_tasks,
NULL
};
const pem_event_action cleanup_asic_tasks[] = {
/* PEM_Task_DisableFPS,*/
pem_task_cleanup_asic,
NULL
};
const pem_event_action prepare_for_pnp_stop_tasks[] = {
/* PEM_Task_PrepareForPnpStop,*/
NULL
};
const pem_event_action set_power_source_tasks[] = {
pem_task_set_power_source,
pem_task_notify_hw_of_power_source,
NULL
};
const pem_event_action set_power_saving_state_tasks[] = {
pem_task_reset_power_saving_state,
pem_task_get_power_saving_state,
pem_task_set_power_saving_state,
/* PEM_Task_ResetODDCState,
* PEM_Task_GetODDCState,
* PEM_Task_SetODDCState,*/
NULL
};
const pem_event_action enable_disable_fps_tasks[] = {
/* PEM_Task_EnableDisableFPS,*/
NULL
};
const pem_event_action set_nbmcu_state_tasks[] = {
/* PEM_Task_NBMCUStateChange,*/
NULL
};
const pem_event_action reset_hardware_dc_notification_tasks[] = {
/* PEM_Task_ResetHardwareDCNotification,*/
NULL
};
const pem_event_action notify_smu_suspend_tasks[] = {
/* PEM_Task_NotifySMUSuspend,*/
NULL
};
const pem_event_action disable_smc_firmware_ctf_tasks[] = {
/* PEM_Task_DisableSMCFirmwareCTF,*/
NULL
};
const pem_event_action disable_fps_tasks[] = {
/* PEM_Task_DisableFPS,*/
NULL
};
const pem_event_action vari_bright_suspend_tasks[] = {
/* PEM_Task_VariBright_Suspend,*/
NULL
};
const pem_event_action reset_fan_speed_to_default_tasks[] = {
/* PEM_Task_ResetFanSpeedToDefault,*/
NULL
};
const pem_event_action power_down_asic_tasks[] = {
/* PEM_Task_DisableFPS,*/
pem_task_power_down_asic,
NULL
};
const pem_event_action disable_stutter_mode_tasks[] = {
/* PEM_Task_DisableStutterMode,*/
NULL
};
const pem_event_action set_connected_standby_tasks[] = {
/* PEM_Task_SetConnectedStandby,*/
NULL
};
const pem_event_action block_hw_access_tasks[] = {
pem_task_block_hw_access,
NULL
};
const pem_event_action unblock_hw_access_tasks[] = {
pem_task_un_block_hw_access,
NULL
};
const pem_event_action resume_connected_standby_tasks[] = {
/* PEM_Task_ResumeConnectedStandby,*/
NULL
};
const pem_event_action notify_smu_resume_tasks[] = {
/* PEM_Task_NotifySMUResume,*/
NULL
};
const pem_event_action reset_display_configCounter_tasks[] = {
pem_task_reset_display_phys_access,
NULL
};
const pem_event_action update_dal_configuration_tasks[] = {
/* PEM_Task_CheckVBlankTime,*/
NULL
};
const pem_event_action vari_bright_resume_tasks[] = {
/* PEM_Task_VariBright_Resume,*/
NULL
};
const pem_event_action notify_hw_power_source_tasks[] = {
pem_task_notify_hw_of_power_source,
NULL
};
const pem_event_action process_vbios_event_info_tasks[] = {
/* PEM_Task_ProcessVbiosEventInfo,*/
NULL
};
const pem_event_action enable_gfx_clock_gating_tasks[] = {
pem_task_enable_gfx_clock_gating,
NULL
};
const pem_event_action enable_gfx_voltage_island_power_gating_tasks[] = {
pem_task_enable_voltage_island_power_gating,
NULL
};
const pem_event_action reset_clock_gating_tasks[] = {
/* PEM_Task_ResetClockGating*/
NULL
};
const pem_event_action notify_smu_vpu_recovery_end_tasks[] = {
/* PEM_Task_NotifySmuVPURecoveryEnd,*/
NULL
};
const pem_event_action disable_vpu_cap_tasks[] = {
/* PEM_Task_DisableVPUCap,*/
NULL
};
const pem_event_action execute_escape_sequence_tasks[] = {
/* PEM_Task_ExecuteEscapesequence,*/
NULL
};
const pem_event_action notify_power_state_change_tasks[] = {
pem_task_notify_power_state_change,
NULL
};
const pem_event_action enable_cgpg_tasks[] = {
pem_task_enable_cgpg,
NULL
};
const pem_event_action disable_cgpg_tasks[] = {
pem_task_disable_cgpg,
NULL
};
const pem_event_action enable_user_2d_performance_tasks[] = {
/* PEM_Task_SetUser2DPerformanceFlag,*/
/* PEM_Task_UpdateUser2DPerformanceEnableEvents,*/
NULL
};
const pem_event_action add_user_2d_performance_state_tasks[] = {
/* PEM_Task_Get2DPerformanceTemplate,*/
/* PEM_Task_AllocateNewPowerStateMemory,*/
/* PEM_Task_CopyNewPowerStateInfo,*/
/* PEM_Task_UpdateNewPowerStateClocks,*/
/* PEM_Task_UpdateNewPowerStateUser2DPerformanceFlag,*/
/* PEM_Task_AddPowerState,*/
/* PEM_Task_ReleaseNewPowerStateMemory,*/
NULL
};
const pem_event_action delete_user_2d_performance_state_tasks[] = {
/* PEM_Task_GetCurrentUser2DPerformanceStateID,*/
/* PEM_Task_DeletePowerState,*/
/* PEM_Task_SetCurrentUser2DPerformanceStateID,*/
NULL
};
const pem_event_action disable_user_2d_performance_tasks[] = {
/* PEM_Task_ResetUser2DPerformanceFlag,*/
/* PEM_Task_UpdateUser2DPerformanceDisableEvents,*/
NULL
};
const pem_event_action enable_stutter_mode_tasks[] = {
pem_task_enable_stutter_mode,
NULL
};
const pem_event_action enable_disable_bapm_tasks[] = {
/*PEM_Task_EnableDisableBAPM,*/
NULL
};
const pem_event_action reset_boot_state_tasks[] = {
pem_task_reset_boot_state,
NULL
};
const pem_event_action create_new_user_performance_state_tasks[] = {
pem_task_create_user_performance_state,
NULL
};
const pem_event_action initialize_thermal_controller_tasks[] = {
pem_task_initialize_thermal_controller,
NULL
};
const pem_event_action uninitialize_thermal_controller_tasks[] = {
pem_task_uninitialize_thermal_controller,
NULL
};
const pem_event_action set_cpu_power_state[] = {
pem_task_set_cpu_power_state,
NULL
};


@ -0,0 +1,100 @@
/*
* Copyright 2015 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef _EVENT_SUB_CHAINS_H_
#define _EVENT_SUB_CHAINS_H_
#include "eventmgr.h"
extern const pem_event_action reset_display_phy_access_tasks[];
extern const pem_event_action broadcast_power_policy_tasks[];
extern const pem_event_action unregister_interrupt_tasks[];
extern const pem_event_action disable_gfx_voltage_island_powergating_tasks[];
extern const pem_event_action disable_gfx_clockgating_tasks[];
extern const pem_event_action block_adjust_power_state_tasks[];
extern const pem_event_action unblock_adjust_power_state_tasks[];
extern const pem_event_action set_performance_state_tasks[];
extern const pem_event_action conditionally_force3D_performance_state_tasks[];
extern const pem_event_action process_vbios_eventinfo_tasks[];
extern const pem_event_action enable_dynamic_state_management_tasks[];
extern const pem_event_action enable_clock_power_gatings_tasks[];
extern const pem_event_action setup_asic_tasks[];
extern const pem_event_action power_budget_tasks[];
extern const pem_event_action system_config_tasks[];
extern const pem_event_action get_2d_performance_state_tasks[];
extern const pem_event_action conditionally_force_3d_performance_state_tasks[];
extern const pem_event_action ungate_all_display_phys_tasks[];
extern const pem_event_action uninitialize_display_phy_access_tasks[];
extern const pem_event_action disable_gfx_voltage_island_power_gating_tasks[];
extern const pem_event_action disable_gfx_clock_gating_tasks[];
extern const pem_event_action set_boot_state_tasks[];
extern const pem_event_action adjust_power_state_tasks[];
extern const pem_event_action disable_dynamic_state_management_tasks[];
extern const pem_event_action disable_clock_power_gatings_tasks[];
extern const pem_event_action cleanup_asic_tasks[];
extern const pem_event_action prepare_for_pnp_stop_tasks[];
extern const pem_event_action set_power_source_tasks[];
extern const pem_event_action set_power_saving_state_tasks[];
extern const pem_event_action enable_disable_fps_tasks[];
extern const pem_event_action set_nbmcu_state_tasks[];
extern const pem_event_action reset_hardware_dc_notification_tasks[];
extern const pem_event_action notify_smu_suspend_tasks[];
extern const pem_event_action disable_smc_firmware_ctf_tasks[];
extern const pem_event_action disable_fps_tasks[];
extern const pem_event_action vari_bright_suspend_tasks[];
extern const pem_event_action reset_fan_speed_to_default_tasks[];
extern const pem_event_action power_down_asic_tasks[];
extern const pem_event_action disable_stutter_mode_tasks[];
extern const pem_event_action set_connected_standby_tasks[];
extern const pem_event_action block_hw_access_tasks[];
extern const pem_event_action unblock_hw_access_tasks[];
extern const pem_event_action resume_connected_standby_tasks[];
extern const pem_event_action notify_smu_resume_tasks[];
extern const pem_event_action reset_display_configCounter_tasks[];
extern const pem_event_action update_dal_configuration_tasks[];
extern const pem_event_action vari_bright_resume_tasks[];
extern const pem_event_action notify_hw_power_source_tasks[];
extern const pem_event_action process_vbios_event_info_tasks[];
extern const pem_event_action enable_gfx_clock_gating_tasks[];
extern const pem_event_action enable_gfx_voltage_island_power_gating_tasks[];
extern const pem_event_action reset_clock_gating_tasks[];
extern const pem_event_action notify_smu_vpu_recovery_end_tasks[];
extern const pem_event_action disable_vpu_cap_tasks[];
extern const pem_event_action execute_escape_sequence_tasks[];
extern const pem_event_action notify_power_state_change_tasks[];
extern const pem_event_action enable_cgpg_tasks[];
extern const pem_event_action disable_cgpg_tasks[];
extern const pem_event_action enable_user_2d_performance_tasks[];
extern const pem_event_action add_user_2d_performance_state_tasks[];
extern const pem_event_action delete_user_2d_performance_state_tasks[];
extern const pem_event_action disable_user_2d_performance_tasks[];
extern const pem_event_action enable_stutter_mode_tasks[];
extern const pem_event_action enable_disable_bapm_tasks[];
extern const pem_event_action reset_boot_state_tasks[];
extern const pem_event_action create_new_user_performance_state_tasks[];
extern const pem_event_action initialize_thermal_controller_tasks[];
extern const pem_event_action uninitialize_thermal_controller_tasks[];
extern const pem_event_action set_cpu_power_state[];
#endif /* _EVENT_SUB_CHAINS_H_ */


@ -0,0 +1,438 @@
/*
* Copyright 2015 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#include "eventmgr.h"
#include "eventinit.h"
#include "eventmanagement.h"
#include "eventmanager.h"
#include "hardwaremanager.h"
#include "eventtasks.h"
#include "power_state.h"
#include "hwmgr.h"
#include "amd_powerplay.h"
#include "psm.h"
#define TEMP_RANGE_MIN (90 * 1000)
#define TEMP_RANGE_MAX (120 * 1000)
int pem_task_update_allowed_performance_levels(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
        if (pem_is_hw_access_blocked(eventmgr))
                return 0;

        phm_force_dpm_levels(eventmgr->hwmgr, AMD_DPM_FORCED_LEVEL_AUTO);

        return 0;
}

/* eventtasks_generic.c */
int pem_task_adjust_power_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
        struct pp_hwmgr *hwmgr;

        if (pem_is_hw_access_blocked(eventmgr))
                return 0;

        hwmgr = eventmgr->hwmgr;
        if (event_data->pnew_power_state != NULL)
                hwmgr->request_ps = event_data->pnew_power_state;

        if (phm_cap_enabled(eventmgr->platform_descriptor->platformCaps, PHM_PlatformCaps_DynamicPatchPowerState))
                psm_adjust_power_state_dynamic(eventmgr, event_data->skip_state_adjust_rules);
        else
                psm_adjust_power_state_static(eventmgr, event_data->skip_state_adjust_rules);

        return 0;
}
int pem_task_power_down_asic(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
return phm_power_down_asic(eventmgr->hwmgr);
}
int pem_task_set_boot_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
if (pem_is_event_data_valid(event_data->valid_fields, PEM_EventDataValid_RequestedStateID))
return psm_set_states(eventmgr, &(event_data->requested_state_id));
return 0;
}
int pem_task_reset_boot_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_update_new_power_state_clocks(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_system_shutdown(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_register_interrupts(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_unregister_interrupts(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
return pem_unregister_interrupts(eventmgr);
}
int pem_task_get_boot_state_id(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
        int result;

        result = psm_get_state_by_classification(eventmgr,
                        PP_StateClassificationFlag_Boot,
                        &(event_data->requested_state_id));

        if (0 == result)
                pem_set_event_data_valid(event_data->valid_fields, PEM_EventDataValid_RequestedStateID);
        else
                pem_unset_event_data_valid(event_data->valid_fields, PEM_EventDataValid_RequestedStateID);

        return result;
}
int pem_task_enable_dynamic_state_management(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
return phm_enable_dynamic_state_management(eventmgr->hwmgr);
}
int pem_task_disable_dynamic_state_management(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_enable_clock_power_gatings_tasks(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
return phm_enable_clock_power_gatings(eventmgr->hwmgr);
}
int pem_task_powerdown_uvd_tasks(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
return phm_powerdown_uvd(eventmgr->hwmgr);
}
int pem_task_powerdown_vce_tasks(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
phm_powergate_uvd(eventmgr->hwmgr, true);
phm_powergate_vce(eventmgr->hwmgr, true);
return 0;
}
int pem_task_disable_clock_power_gatings_tasks(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_start_asic_block_usage(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_stop_asic_block_usage(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_setup_asic(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
return phm_setup_asic(eventmgr->hwmgr);
}
int pem_task_cleanup_asic(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_store_dal_configuration(struct pp_eventmgr *eventmgr, const struct amd_display_configuration *display_config)
{
/* TODO */
return 0;
/*phm_store_dal_configuration_data(eventmgr->hwmgr, display_config) */
}
int pem_task_notify_hw_mgr_display_configuration_change(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
if (pem_is_hw_access_blocked(eventmgr))
return 0;
return phm_display_configuration_changed(eventmgr->hwmgr);
}
int pem_task_notify_hw_mgr_pre_display_configuration_change(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
return 0;
}
int pem_task_notify_smc_display_config_after_power_state_adjustment(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
if (pem_is_hw_access_blocked(eventmgr))
return 0;
return phm_notify_smc_display_config_after_ps_adjustment(eventmgr->hwmgr);
}
int pem_task_block_adjust_power_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
        eventmgr->block_adjust_power_state = true;
        /* TODO: PHM_ResetIPSCounter(pEventMgr->pHwMgr); */
        return 0;
}

int pem_task_unblock_adjust_power_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
        eventmgr->block_adjust_power_state = false;
        return 0;
}
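The block/unblock tasks set a flag that other tasks consult via pem_is_hw_access_blocked(): while the flag is held, hardware-touching tasks return 0 and silently do nothing rather than fail. A minimal self-contained sketch of that gating pattern (function names and the counter are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

/* Mirrors eventmgr->block_adjust_power_state. */
static bool block_adjust_power_state;
static int hw_touches;  /* stand-in counter for real hardware work */

static int task_block(void)   { block_adjust_power_state = true;  return 0; }
static int task_unblock(void) { block_adjust_power_state = false; return 0; }

static int task_adjust_power_state(void)
{
        if (block_adjust_power_state)   /* like pem_is_hw_access_blocked() */
                return 0;               /* skip silently, not an error */
        hw_touches++;                   /* would reprogram hardware here */
        return 0;
}
```

Returning 0 from the guarded path matters: the chain executor latches the first non-zero return, so a blocked task must report success to let the rest of the chain proceed.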
int pem_task_notify_power_state_change(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_block_hw_access(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_un_block_hw_access(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_reset_display_phys_access(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_set_cpu_power_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
return phm_set_cpu_power_state(eventmgr->hwmgr);
}
/*powersaving*/
int pem_task_set_power_source(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_notify_hw_of_power_source(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_get_power_saving_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_reset_power_saving_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_set_power_saving_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_set_screen_state_on(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_set_screen_state_off(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_enable_voltage_island_power_gating(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_disable_voltage_island_power_gating(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_enable_cgpg(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_disable_cgpg(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_enable_clock_power_gating(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_enable_gfx_clock_gating(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_disable_gfx_clock_gating(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
/* performance */
int pem_task_set_performance_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
if (pem_is_event_data_valid(event_data->valid_fields, PEM_EventDataValid_RequestedStateID))
return psm_set_states(eventmgr, &(event_data->requested_state_id));
return 0;
}
int pem_task_conditionally_force_3d_performance_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_enable_stutter_mode(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
/* TODO */
return 0;
}
int pem_task_get_2D_performance_state_id(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
int result;
if (eventmgr->features[PP_Feature_PowerPlay].supported &&
!(eventmgr->features[PP_Feature_PowerPlay].enabled))
result = psm_get_state_by_classification(eventmgr,
PP_StateClassificationFlag_Boot,
&(event_data->requested_state_id));
else if (eventmgr->features[PP_Feature_User2DPerformance].enabled)
result = psm_get_state_by_classification(eventmgr,
PP_StateClassificationFlag_User2DPerformance,
&(event_data->requested_state_id));
else
result = psm_get_ui_state(eventmgr, PP_StateUILabel_Performance,
&(event_data->requested_state_id));
if (0 == result)
pem_set_event_data_valid(event_data->valid_fields, PEM_EventDataValid_RequestedStateID);
else
pem_unset_event_data_valid(event_data->valid_fields, PEM_EventDataValid_RequestedStateID);
return result;
}
int pem_task_create_user_performance_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
struct pp_power_state *state;
int table_entries;
struct pp_hwmgr *hwmgr = eventmgr->hwmgr;
int i;
table_entries = hwmgr->num_ps;
state = hwmgr->ps;
restart_search:
for (i = 0; i < table_entries; i++) {
if (state->classification.ui_label & event_data->requested_ui_label) {
event_data->pnew_power_state = state;
return 0;
}
state = (struct pp_power_state *)((unsigned long)state + hwmgr->ps_size);
}
switch (event_data->requested_ui_label) {
case PP_StateUILabel_Battery:
case PP_StateUILabel_Balanced:
event_data->requested_ui_label = PP_StateUILabel_Performance;
goto restart_search;
default:
break;
}
return -1;
}
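The `restart_search` fallback above can be shown in isolation. This is a minimal standalone sketch (the enum values and function names below are illustrative, not driver API): when no entry carries the requested UI label, Battery and Balanced degrade to Performance and the table is scanned once more; the retry cannot loop forever because Performance has no further fallback.

```c
/* Illustrative sketch of the label-search-with-fallback pattern. */
enum demo_ui_label {
	DEMO_BATTERY = 1,
	DEMO_BALANCED = 2,
	DEMO_PERFORMANCE = 4
};

static int demo_find_label(const unsigned int *labels, int n, unsigned int want)
{
	int i;

restart_search:
	for (i = 0; i < n; i++)
		if (labels[i] & want)
			return i;	/* first entry carrying the label */

	switch (want) {
	case DEMO_BATTERY:
	case DEMO_BALANCED:
		/* no match: degrade the request and scan once more */
		want = DEMO_PERFORMANCE;
		goto restart_search;
	default:
		break;
	}
	return -1;	/* nothing matched, even after the fallback */
}
```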
int pem_task_initialize_thermal_controller(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
	struct PP_TemperatureRange range;

	range.max = TEMP_RANGE_MAX;
	range.min = TEMP_RANGE_MIN;

	if (eventmgr == NULL || eventmgr->platform_descriptor == NULL)
		return -EINVAL;

	if (phm_cap_enabled(eventmgr->platform_descriptor->platformCaps, PHM_PlatformCaps_ThermalController))
		return phm_start_thermal_controller(eventmgr->hwmgr, &range);

	return 0;
}

int pem_task_uninitialize_thermal_controller(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data)
{
	return phm_stop_thermal_controller(eventmgr->hwmgr);
}


@@ -0,0 +1,88 @@
/*
* Copyright 2015 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef _EVENT_TASKS_H_
#define _EVENT_TASKS_H_
#include "eventmgr.h"
struct amd_display_configuration;
/* eventtasks_generic.c */
int pem_task_adjust_power_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_power_down_asic(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_get_boot_state_id(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_set_boot_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_reset_boot_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_update_new_power_state_clocks(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_system_shutdown(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_register_interrupts(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_unregister_interrupts(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_enable_dynamic_state_management(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_disable_dynamic_state_management(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_enable_clock_power_gatings_tasks(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_powerdown_uvd_tasks(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_powerdown_vce_tasks(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_disable_clock_power_gatings_tasks(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_start_asic_block_usage(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_stop_asic_block_usage(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_setup_asic(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_cleanup_asic(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_store_dal_configuration(struct pp_eventmgr *eventmgr, const struct amd_display_configuration *display_config);
int pem_task_notify_hw_mgr_display_configuration_change(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_notify_hw_mgr_pre_display_configuration_change(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_block_adjust_power_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_unblock_adjust_power_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_notify_power_state_change(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_block_hw_access(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_un_block_hw_access(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_reset_display_phys_access(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_set_cpu_power_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_notify_smc_display_config_after_power_state_adjustment(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
/*powersaving*/
int pem_task_set_power_source(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_notify_hw_of_power_source(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_get_power_saving_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_reset_power_saving_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_set_power_saving_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_set_screen_state_on(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_set_screen_state_off(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_enable_voltage_island_power_gating(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_disable_voltage_island_power_gating(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_enable_cgpg(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_disable_cgpg(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_enable_gfx_clock_gating(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_disable_gfx_clock_gating(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_enable_stutter_mode(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
/* performance */
int pem_task_set_performance_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_conditionally_force_3d_performance_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_get_2D_performance_state_id(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_create_user_performance_state(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_update_allowed_performance_levels(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
/*thermal */
int pem_task_initialize_thermal_controller(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
int pem_task_uninitialize_thermal_controller(struct pp_eventmgr *eventmgr, struct pem_event_data *event_data);
#endif /* _EVENT_TASKS_H_ */


@@ -0,0 +1,117 @@
/*
* Copyright 2015 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#include "psm.h"
int psm_get_ui_state(struct pp_eventmgr *eventmgr, enum PP_StateUILabel ui_label, unsigned long *state_id)
{
	struct pp_power_state *state;
	int table_entries;
	struct pp_hwmgr *hwmgr = eventmgr->hwmgr;
	int i;

	table_entries = hwmgr->num_ps;
	state = hwmgr->ps;

	for (i = 0; i < table_entries; i++) {
		if (state->classification.ui_label & ui_label) {
			*state_id = state->id;
			return 0;
		}
		state = (struct pp_power_state *)((unsigned long)state + hwmgr->ps_size);
	}

	return -1;
}

int psm_get_state_by_classification(struct pp_eventmgr *eventmgr, enum PP_StateClassificationFlag flag, unsigned long *state_id)
{
	struct pp_power_state *state;
	int table_entries;
	struct pp_hwmgr *hwmgr = eventmgr->hwmgr;
	int i;

	table_entries = hwmgr->num_ps;
	state = hwmgr->ps;

	for (i = 0; i < table_entries; i++) {
		if (state->classification.flags & flag) {
			*state_id = state->id;
			return 0;
		}
		state = (struct pp_power_state *)((unsigned long)state + hwmgr->ps_size);
	}

	return -1;
}

int psm_set_states(struct pp_eventmgr *eventmgr, unsigned long *state_id)
{
	struct pp_power_state *state;
	int table_entries;
	struct pp_hwmgr *hwmgr = eventmgr->hwmgr;
	int i;

	table_entries = hwmgr->num_ps;
	state = hwmgr->ps;

	for (i = 0; i < table_entries; i++) {
		if (state->id == *state_id) {
			hwmgr->request_ps = state;
			return 0;
		}
		state = (struct pp_power_state *)((unsigned long)state + hwmgr->ps_size);
	}

	return -1;
}

int psm_adjust_power_state_dynamic(struct pp_eventmgr *eventmgr, bool skip)
{
	struct pp_power_state *pcurrent;
	struct pp_power_state *requested;
	struct pp_hwmgr *hwmgr;
	bool equal;

	if (skip)
		return 0;

	hwmgr = eventmgr->hwmgr;
	pcurrent = hwmgr->current_ps;
	requested = hwmgr->request_ps;

	if (requested == NULL)
		return 0;

	if (pcurrent == NULL || (0 != phm_check_states_equal(hwmgr, &pcurrent->hardware, &requested->hardware, &equal)))
		equal = false;

	if (!equal || phm_check_smc_update_required_for_display_configuration(hwmgr)) {
		phm_apply_state_adjust_rules(hwmgr, requested, pcurrent);
		phm_set_power_state(hwmgr, &pcurrent->hardware, &requested->hardware);
		hwmgr->current_ps = requested;
	}

	return 0;
}

int psm_adjust_power_state_static(struct pp_eventmgr *eventmgr, bool skip)
{
	return 0;
}
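Because `pp_power_state` carries an ASIC-specific hardware state of variable size, every lookup above steps through `hwmgr->ps` by the runtime stride `hwmgr->ps_size` rather than by `sizeof`. A minimal standalone sketch of that access pattern (the `demo_` types and names are illustrative, not the driver's):

```c
#include <stddef.h>

/* A record whose real on-disk size exceeds sizeof(struct): callers must
 * advance by an externally supplied stride, exactly as psm_set_states()
 * does with hwmgr->ps_size. */
struct demo_power_state {
	unsigned long id;
	/* ASIC-specific hardware state of variable size follows here */
};

static int demo_set_state(const void *table, int entries, size_t stride,
			  unsigned long state_id, const void **out)
{
	const unsigned char *p = table;
	int i;

	for (i = 0; i < entries; i++) {
		const struct demo_power_state *s = (const void *)p;

		if (s->id == state_id) {
			*out = s;
			return 0;
		}
		p += stride;	/* stride may exceed sizeof(*s) */
	}
	return -1;
}

/* Self-check: three records padded to a larger stride; looking up id 20
 * must land on the second record. Returns 0 on success. */
static int demo_self_test(void)
{
	struct { struct demo_power_state base; char pad[8]; } arr[3] =
		{ { {10} }, { {20} }, { {30} } };
	const void *hit = NULL;

	if (demo_set_state(arr, 3, sizeof(arr[0]), 20, &hit))
		return 1;
	return hit == (const void *)&arr[1] ? 0 : 2;
}
```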


@@ -0,0 +1,38 @@
/*
* Copyright 2015 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef _PSM_H_
#define _PSM_H_

#include "eventmgr.h"
#include "eventinit.h"
#include "eventmanagement.h"
#include "eventmanager.h"
#include "power_state.h"
#include "hardwaremanager.h"

int psm_get_ui_state(struct pp_eventmgr *eventmgr, enum PP_StateUILabel ui_label, unsigned long *state_id);
int psm_get_state_by_classification(struct pp_eventmgr *eventmgr, enum PP_StateClassificationFlag flag, unsigned long *state_id);
int psm_set_states(struct pp_eventmgr *eventmgr, unsigned long *state_id);
int psm_adjust_power_state_dynamic(struct pp_eventmgr *eventmgr, bool skip);
int psm_adjust_power_state_static(struct pp_eventmgr *eventmgr, bool skip);

#endif /* _PSM_H_ */


@@ -0,0 +1,15 @@
#
# Makefile for the 'hw manager' sub-component of powerplay.
# It provides the hardware management services for the driver.
HARDWARE_MGR = hwmgr.o processpptables.o functiontables.o \
	       hardwaremanager.o pp_acpi.o cz_hwmgr.o \
	       cz_clockpowergating.o \
	       tonga_processpptables.o ppatomctrl.o \
	       tonga_hwmgr.o pppcielanes.o tonga_thermal.o \
	       fiji_powertune.o fiji_hwmgr.o tonga_clockpowergating.o \
	       fiji_clockpowergating.o fiji_thermal.o

AMD_PP_HWMGR = $(addprefix $(AMD_PP_PATH)/hwmgr/,$(HARDWARE_MGR))

AMD_POWERPLAY_FILES += $(AMD_PP_HWMGR)


@@ -0,0 +1,252 @@
/*
* Copyright 2015 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#include "hwmgr.h"
#include "cz_clockpowergating.h"
#include "cz_ppsmc.h"
/* PhyID -> Status Mapping in DDI_PHY_GEN_STATUS
 *  0    GFX0L (3:0),  (27:24),
 *  1    GFX0H (7:4),  (31:28),
 *  2    GFX1L (3:0),  (19:16),
 *  3    GFX1H (7:4),  (23:20),
 *  4    DDIL  (3:0),  (11: 8),
 *  5    DDIH  (7:4),  (15:12),
 *  6    DDI2L (3:0),  ( 3: 0),
 *  7    DDI2H (7:4),  ( 7: 4),
 */
#define DDI_PHY_GEN_STATUS_VAL(phyID) (1 << ((3 - ((phyID & 0x07) / 2)) * 8 + (phyID & 0x01) * 4))
#define IS_PHY_ID_USED_BY_PLL(PhyID) (((0xF3 & (1 << PhyID)) & 0xFF) ? true : false)
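The bit-mapping table in the comment above can be checked against the macro. Rewriting `DDI_PHY_GEN_STATUS_VAL` as a plain function (an illustrative re-derivation, not driver code) makes the layout explicit: phy pairs (0,1), (2,3), (4,5), (6,7) land in bytes 3 down to 0 of the status word, and the odd phy of each pair uses the high nibble:

```c
#include <stdint.h>

/* Re-derivation of DDI_PHY_GEN_STATUS_VAL: the byte index is
 * 3 - (phy_id / 2), and the nibble offset within the byte is 4 for odd
 * phy IDs, 0 for even ones. Returns the low bit of that nibble. */
static uint32_t ddi_phy_gen_status_val(unsigned int phy_id)
{
	return 1u << ((3 - ((phy_id & 0x07) / 2)) * 8 + (phy_id & 0x01) * 4);
}
```

So phy 0 (GFX0L) maps to bit 24, phy 1 (GFX0H) to bit 28, phy 4 (DDIL) to bit 8, and phy 7 (DDI2H) to bit 4, matching the second column of the comment.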
int cz_phm_set_asic_block_gating(struct pp_hwmgr *hwmgr, enum PHM_AsicBlock block, enum PHM_ClockGateSetting gating)
{
	int ret = 0;

	switch (block) {
	case PHM_AsicBlock_UVD_MVC:
	case PHM_AsicBlock_UVD:
	case PHM_AsicBlock_UVD_HD:
	case PHM_AsicBlock_UVD_SD:
		if (gating == PHM_ClockGateSetting_StaticOff)
			ret = cz_dpm_powerdown_uvd(hwmgr);
		else
			ret = cz_dpm_powerup_uvd(hwmgr);
		break;
	case PHM_AsicBlock_GFX:
	default:
		break;
	}

	return ret;
}

bool cz_phm_is_safe_for_asic_block(struct pp_hwmgr *hwmgr, const struct pp_hw_power_state *state, enum PHM_AsicBlock block)
{
	return true;
}

int cz_phm_enable_disable_gfx_power_gating(struct pp_hwmgr *hwmgr, bool enable)
{
	return 0;
}

int cz_phm_smu_power_up_down_pcie(struct pp_hwmgr *hwmgr, uint32_t target, bool up, uint32_t args)
{
	/* TODO */
	return 0;
}

int cz_phm_initialize_display_phy_access(struct pp_hwmgr *hwmgr, bool initialize, bool accesshw)
{
	/* TODO */
	return 0;
}

int cz_phm_get_display_phy_access_info(struct pp_hwmgr *hwmgr)
{
	/* TODO */
	return 0;
}

int cz_phm_gate_unused_display_phys(struct pp_hwmgr *hwmgr)
{
	/* TODO */
	return 0;
}

int cz_phm_ungate_all_display_phys(struct pp_hwmgr *hwmgr)
{
	/* TODO */
	return 0;
}

static int cz_tf_uvd_power_gating_initialize(struct pp_hwmgr *hwmgr, void *pInput, void *pOutput, void *pStorage, int Result)
{
	return 0;
}

static int cz_tf_vce_power_gating_initialize(struct pp_hwmgr *hwmgr, void *pInput, void *pOutput, void *pStorage, int Result)
{
	return 0;
}

int cz_enable_disable_uvd_dpm(struct pp_hwmgr *hwmgr, bool enable)
{
	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
	uint32_t dpm_features = 0;

	if (enable &&
	    phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
			    PHM_PlatformCaps_UVDDPM)) {
		cz_hwmgr->dpm_flags |= DPMFlags_UVD_Enabled;
		dpm_features |= UVD_DPM_MASK;
		smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
			    PPSMC_MSG_EnableAllSmuFeatures, dpm_features);
	} else {
		dpm_features |= UVD_DPM_MASK;
		cz_hwmgr->dpm_flags &= ~DPMFlags_UVD_Enabled;
		smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
			    PPSMC_MSG_DisableAllSmuFeatures, dpm_features);
	}

	return 0;
}

int cz_enable_disable_vce_dpm(struct pp_hwmgr *hwmgr, bool enable)
{
	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
	uint32_t dpm_features = 0;

	if (enable &&
	    phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
			    PHM_PlatformCaps_VCEDPM)) {
		cz_hwmgr->dpm_flags |= DPMFlags_VCE_Enabled;
		dpm_features |= VCE_DPM_MASK;
		smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
			    PPSMC_MSG_EnableAllSmuFeatures, dpm_features);
	} else {
		dpm_features |= VCE_DPM_MASK;
		cz_hwmgr->dpm_flags &= ~DPMFlags_VCE_Enabled;
		smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
			    PPSMC_MSG_DisableAllSmuFeatures, dpm_features);
	}

	return 0;
}

int cz_dpm_powergate_uvd(struct pp_hwmgr *hwmgr, bool bgate)
{
	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);

	if (cz_hwmgr->uvd_power_gated == bgate)
		return 0;

	cz_hwmgr->uvd_power_gated = bgate;

	if (bgate) {
		cgs_set_clockgating_state(hwmgr->device,
					  AMD_IP_BLOCK_TYPE_UVD,
					  AMD_CG_STATE_UNGATE);
		cgs_set_powergating_state(hwmgr->device,
					  AMD_IP_BLOCK_TYPE_UVD,
					  AMD_PG_STATE_GATE);
		cz_dpm_update_uvd_dpm(hwmgr, true);
		cz_dpm_powerdown_uvd(hwmgr);
	} else {
		cz_dpm_powerup_uvd(hwmgr);
		/* note: the two calls below used constants from the wrong
		 * enum families (AMD_PG_STATE_GATE / AMD_CG_STATE_UNGATE);
		 * the numerically equivalent constants of the matching
		 * families are used here */
		cgs_set_clockgating_state(hwmgr->device,
					  AMD_IP_BLOCK_TYPE_UVD,
					  AMD_CG_STATE_GATE);
		cgs_set_powergating_state(hwmgr->device,
					  AMD_IP_BLOCK_TYPE_UVD,
					  AMD_PG_STATE_UNGATE);
		cz_dpm_update_uvd_dpm(hwmgr, false);
	}

	return 0;
}

int cz_dpm_powergate_vce(struct pp_hwmgr *hwmgr, bool bgate)
{
	struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);

	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
			    PHM_PlatformCaps_VCEPowerGating)) {
		if (cz_hwmgr->vce_power_gated != bgate) {
			if (bgate) {
				cgs_set_clockgating_state(hwmgr->device,
							  AMD_IP_BLOCK_TYPE_VCE,
							  AMD_CG_STATE_UNGATE);
				cgs_set_powergating_state(hwmgr->device,
							  AMD_IP_BLOCK_TYPE_VCE,
							  AMD_PG_STATE_GATE);
				cz_enable_disable_vce_dpm(hwmgr, false);
				/* TODO: figure out why vce can't be powered off */
				cz_hwmgr->vce_power_gated = true;
			} else {
				cz_dpm_powerup_vce(hwmgr);
				cz_hwmgr->vce_power_gated = false;
				/* as in the UVD path, use the matching enum
				 * families for the two calls below */
				cgs_set_clockgating_state(hwmgr->device,
							  AMD_IP_BLOCK_TYPE_VCE,
							  AMD_CG_STATE_GATE);
				cgs_set_powergating_state(hwmgr->device,
							  AMD_IP_BLOCK_TYPE_VCE,
							  AMD_PG_STATE_UNGATE);
				cz_dpm_update_vce_dpm(hwmgr);
				cz_enable_disable_vce_dpm(hwmgr, true);
				return 0;
			}
		}
	} else {
		cz_dpm_update_vce_dpm(hwmgr);
		cz_enable_disable_vce_dpm(hwmgr, true);
		return 0;
	}

	if (!cz_hwmgr->vce_power_gated)
		cz_dpm_update_vce_dpm(hwmgr);

	return 0;
}

static struct phm_master_table_item cz_enable_clock_power_gatings_list[] = {
	/* we don't need an exit table here, because there is only D3 cold on Kv */
	{ phm_cf_want_uvd_power_gating, cz_tf_uvd_power_gating_initialize },
	{ phm_cf_want_vce_power_gating, cz_tf_vce_power_gating_initialize },
	/* to do: { NULL, cz_tf_xdma_power_gating_enable }, */
	{ NULL, NULL }
};

struct phm_master_table_header cz_phm_enable_clock_power_gatings_master = {
	0,
	PHM_MasterTableFlag_None,
	cz_enable_clock_power_gatings_list
};


@@ -0,0 +1,37 @@
/*
* Copyright 2015 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef _CZ_CLOCK_POWER_GATING_H_
#define _CZ_CLOCK_POWER_GATING_H_
#include "cz_hwmgr.h"
#include "pp_asicblocks.h"
extern int cz_phm_set_asic_block_gating(struct pp_hwmgr *hwmgr, enum PHM_AsicBlock block, enum PHM_ClockGateSetting gating);
extern struct phm_master_table_header cz_phm_enable_clock_power_gatings_master;
extern struct phm_master_table_header cz_phm_disable_clock_power_gatings_master;
extern int cz_dpm_powergate_vce(struct pp_hwmgr *hwmgr, bool bgate);
extern int cz_dpm_powergate_uvd(struct pp_hwmgr *hwmgr, bool bgate);
extern int cz_enable_disable_vce_dpm(struct pp_hwmgr *hwmgr, bool enable);
extern int cz_enable_disable_uvd_dpm(struct pp_hwmgr *hwmgr, bool enable);
#endif /* _CZ_CLOCK_POWER_GATING_H_ */

(File diff suppressed because it is too large.)


@@ -0,0 +1,326 @@
/*
* Copyright 2015 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef _CZ_HWMGR_H_
#define _CZ_HWMGR_H_
#include "cgs_common.h"
#include "ppatomctrl.h"
#define CZ_NUM_NBPSTATES 4
#define CZ_NUM_NBPMEMORYCLOCK 2
#define MAX_DISPLAY_CLOCK_LEVEL 8
#define CZ_AT_DFLT 30
#define CZ_MAX_HARDWARE_POWERLEVELS 8
#define PPCZ_VOTINGRIGHTSCLIENTS_DFLT0 0x3FFFC102
#define CZ_MIN_DEEP_SLEEP_SCLK 800
/* Carrizo device IDs */
#define DEVICE_ID_CZ_9870 0x9870
#define DEVICE_ID_CZ_9874 0x9874
#define DEVICE_ID_CZ_9875 0x9875
#define DEVICE_ID_CZ_9876 0x9876
#define DEVICE_ID_CZ_9877 0x9877
#define PHMCZ_WRITE_SMC_REGISTER(device, reg, value) \
	cgs_write_ind_register(device, CGS_IND_REG__SMC, ix##reg, value)

struct cz_dpm_entry {
	uint32_t soft_min_clk;
	uint32_t hard_min_clk;
	uint32_t soft_max_clk;
	uint32_t hard_max_clk;
};
struct cz_sys_info {
	uint32_t bootup_uma_clock;
	uint32_t bootup_engine_clock;
	uint32_t dentist_vco_freq;
	uint32_t nb_dpm_enable;
	uint32_t nbp_memory_clock[CZ_NUM_NBPMEMORYCLOCK];
	uint32_t nbp_n_clock[CZ_NUM_NBPSTATES];
	uint16_t nbp_voltage_index[CZ_NUM_NBPSTATES];
	uint32_t display_clock[MAX_DISPLAY_CLOCK_LEVEL];
	uint16_t bootup_nb_voltage_index;
	uint8_t htc_tmp_lmt;
	uint8_t htc_hyst_lmt;
	uint32_t system_config;
	uint32_t uma_channel_number;
};
#define MAX_DISPLAYPHY_IDS 0x8
#define DISPLAYPHY_LANEMASK 0xF
#define UNKNOWN_TRANSMITTER_PHY_ID (-1)
#define DISPLAYPHY_PHYID_SHIFT 24
#define DISPLAYPHY_LANESELECT_SHIFT 16
#define DISPLAYPHY_RX_SELECT 0x1
#define DISPLAYPHY_TX_SELECT 0x2
#define DISPLAYPHY_CORE_SELECT 0x4
#define DDI_POWERGATING_ARG(phyID, lanemask, rx, tx, core) \
	(((uint32_t)(phyID)) << DISPLAYPHY_PHYID_SHIFT | \
	 ((uint32_t)(lanemask)) << DISPLAYPHY_LANESELECT_SHIFT | \
	 ((rx) ? DISPLAYPHY_RX_SELECT : 0) | \
	 ((tx) ? DISPLAYPHY_TX_SELECT : 0) | \
	 ((core) ? DISPLAYPHY_CORE_SELECT : 0))
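The field layout packed by `DDI_POWERGATING_ARG` follows directly from the shift and select constants above: phy ID in bits 31:24, lane mask in bits 23:16, and the rx/tx/core selects in bits 0-2. A standalone function form of the same packing (an illustrative re-derivation, not driver code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Standalone copy of the DDI_POWERGATING_ARG layout: phyID << 24,
 * lanemask << 16, and rx=0x1 / tx=0x2 / core=0x4 in the low bits. */
static uint32_t ddi_powergating_arg(uint8_t phy_id, uint8_t lane_mask,
				    bool rx, bool tx, bool core)
{
	return ((uint32_t)phy_id << 24) |
	       ((uint32_t)lane_mask << 16) |
	       (rx ? 0x1u : 0) | (tx ? 0x2u : 0) | (core ? 0x4u : 0);
}
```

For example, phy 5 with all four lanes, rx and core selected but not tx, packs to 0x050F0005.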
struct cz_display_phy_info_entry {
	uint8_t phy_present;
	uint8_t active_lane_mapping;
	uint8_t display_config_type;
	uint8_t active_number_of_lanes;
};

#define CZ_MAX_DISPLAYPHY_IDS 10

struct cz_display_phy_info {
	bool display_phy_access_initialized;
	struct cz_display_phy_info_entry entries[CZ_MAX_DISPLAYPHY_IDS];
};
struct cz_power_level {
	uint32_t engineClock;
	uint8_t vddcIndex;
	uint8_t dsDividerIndex;
	uint8_t ssDividerIndex;
	uint8_t allowGnbSlow;
	uint8_t forceNBPstate;
	uint8_t display_wm;
	uint8_t vce_wm;
	uint8_t numSIMDToPowerDown;
	uint8_t hysteresis_up;
	uint8_t rsv[3];
};

struct cz_uvd_clocks {
	uint32_t vclk;
	uint32_t dclk;
	uint32_t vclk_low_divider;
	uint32_t vclk_high_divider;
	uint32_t dclk_low_divider;
	uint32_t dclk_high_divider;
};

enum cz_pstate_previous_action {
	DO_NOTHING = 1,
	FORCE_HIGH,
	CANCEL_FORCE_HIGH
};

struct pp_disable_nb_ps_flags {
	union {
		struct {
			uint32_t entry : 1;
			uint32_t display : 1;
			uint32_t driver : 1;
			uint32_t vce : 1;
			uint32_t uvd : 1;
			uint32_t acp : 1;
			uint32_t reserved : 26;
		} bits;
		uint32_t u32All;
	};
};
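`pp_disable_nb_ps_flags` uses a common driver idiom: an anonymous union overlays per-client bit-fields with a single `u32All` word, so individual blockers (display, UVD, VCE, ...) can be set by name while "is anything blocking NB P-state switching?" is one integer test. A standalone sketch of the idiom (illustrative names; exact bit positions are implementation-defined, so only zero/non-zero is relied on):

```c
#include <stdint.h>
#include <string.h>

/* Mirror of the pp_disable_nb_ps_flags idiom: named single-bit blockers
 * overlaid with an aggregate word for cheap "any blocker set?" checks. */
struct demo_disable_nb_ps_flags {
	union {
		struct {
			uint32_t entry : 1;
			uint32_t display : 1;
			uint32_t driver : 1;
			uint32_t vce : 1;
			uint32_t uvd : 1;
			uint32_t acp : 1;
			uint32_t reserved : 26;
		} bits;
		uint32_t u32All;
	};
};

/* Set two blockers by name and return the aggregate word; non-zero
 * means at least one client is blocking the switch. */
static uint32_t demo_block_for_uvd(void)
{
	struct demo_disable_nb_ps_flags f;

	memset(&f, 0, sizeof(f));
	f.bits.uvd = 1;
	f.bits.display = 1;
	return f.u32All;
}
```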
struct cz_power_state {
	unsigned int magic;
	uint32_t level;
	struct cz_uvd_clocks uvd_clocks;
	uint32_t evclk;
	uint32_t ecclk;
	uint32_t samclk;
	uint32_t acpclk;
	bool need_dfs_bypass;
	uint32_t nbps_flags;
	uint32_t bapm_flags;
	uint8_t dpm_0_pg_nb_ps_low;
	uint8_t dpm_0_pg_nb_ps_high;
	uint8_t dpm_x_nb_ps_low;
	uint8_t dpm_x_nb_ps_high;
	enum cz_pstate_previous_action action;
	struct cz_power_level levels[CZ_MAX_HARDWARE_POWERLEVELS];
	struct pp_disable_nb_ps_flags disable_nb_ps_flag;
};

#define DPMFlags_SCLK_Enabled 0x00000001
#define DPMFlags_UVD_Enabled 0x00000002
#define DPMFlags_VCE_Enabled 0x00000004
#define DPMFlags_ACP_Enabled 0x00000008
#define DPMFlags_ForceHighestValid 0x40000000
#define DPMFlags_Debug 0x80000000

#define SMU_EnabledFeatureScoreboard_AcpDpmOn 0x00000001 /* bit 0 */
#define SMU_EnabledFeatureScoreboard_SclkDpmOn 0x00200000 /* bit 21 */
#define SMU_EnabledFeatureScoreboard_UvdDpmOn 0x00800000 /* bit 23 */
#define SMU_EnabledFeatureScoreboard_VceDpmOn 0x01000000 /* bit 24 */

struct cc6_settings {
	bool cc6_setting_changed;
	bool nb_pstate_switch_disable;	/* controls NB PState switch */
	bool cpu_cc6_disable;		/* controls CPU CState switch (on or off) */
	bool cpu_pstate_disable;
	uint32_t cpu_pstate_separation_time;
};
struct cz_hwmgr {
	uint32_t activity_target[CZ_MAX_HARDWARE_POWERLEVELS];
	uint32_t dpm_interval;
	uint32_t voltage_drop_threshold;
	uint32_t voting_rights_clients;
	uint32_t disable_driver_thermal_policy;
	uint32_t static_screen_threshold;
	uint32_t gfx_power_gating_threshold;
	uint32_t activity_hysteresis;
	uint32_t bootup_sclk_divider;
	uint32_t gfx_ramp_step;
	uint32_t gfx_ramp_delay;	/* in microseconds */
	uint32_t thermal_auto_throttling_treshold;
	struct cz_sys_info sys_info;
	struct cz_power_level boot_power_level;
	struct cz_power_state *cz_current_ps;
	struct cz_power_state *cz_requested_ps;
	uint32_t mgcg_cgtt_local0;
	uint32_t mgcg_cgtt_local1;
	uint32_t tdr_clock;	/* in 10 kHz units */
	uint32_t ddi_power_gating_disabled;
	uint32_t disable_gfx_power_gating_in_uvd;
	uint32_t disable_nb_ps3_in_battery;
	uint32_t lock_nb_ps_in_uvd_play_back;
	struct cz_display_phy_info display_phy_info;
	uint32_t vce_slow_sclk_threshold;	/* default 200 MHz */
	uint32_t dce_slow_sclk_threshold;	/* default 300 MHz */
	uint32_t min_sclk_did;	/* minimum sclk divider */
	bool disp_clk_bypass;
	bool disp_clk_bypass_pending;
	uint32_t bapm_enabled;
	uint32_t clock_slow_down_freq;
	uint32_t skip_clock_slow_down;
	uint32_t enable_nb_ps_policy;
	uint32_t voltage_drop_in_dce_power_gating;
	uint32_t uvd_dpm_interval;
	uint32_t override_dynamic_mgpg;
	uint32_t lclk_deep_enabled;
	uint32_t uvd_performance;
	bool video_start;
	bool battery_state;
	uint32_t lowest_valid;
	uint32_t highest_valid;
	uint32_t high_voltage_threshold;
	uint32_t is_nb_dpm_enabled;
	struct cc6_settings cc6_settings;
	uint32_t is_voltage_island_enabled;
	bool pgacpinit;
	uint8_t disp_config;
	/* PowerTune */
	uint32_t power_containment_features;
	bool cac_enabled;
	bool disable_uvd_power_tune_feature;
	bool enable_ba_pm_feature;
	bool enable_tdc_limit_feature;
	uint32_t sram_end;
	uint32_t dpm_table_start;
	uint32_t soft_regs_start;
	uint8_t uvd_level_count;
	uint8_t vce_level_count;
	uint8_t acp_level_count;
	uint8_t samu_level_count;
	uint32_t fps_high_threshold;
	uint32_t fps_low_threshold;
	uint32_t dpm_flags;
	struct cz_dpm_entry sclk_dpm;
	struct cz_dpm_entry uvd_dpm;
	struct cz_dpm_entry vce_dpm;
	struct cz_dpm_entry acp_dpm;
	uint8_t uvd_boot_level;
	uint8_t vce_boot_level;
	uint8_t acp_boot_level;
	uint8_t samu_boot_level;
	uint8_t uvd_interval;
	uint8_t vce_interval;
	uint8_t acp_interval;
	uint8_t samu_interval;
	uint8_t graphics_interval;
	uint8_t graphics_therm_throttle_enable;
	uint8_t graphics_voltage_change_enable;
	uint8_t graphics_clk_slow_enable;
	uint8_t graphics_clk_slow_divider;
	uint32_t display_cac;
	uint32_t low_sclk_interrupt_threshold;
	uint32_t dram_log_addr_h;
	uint32_t dram_log_addr_l;
	uint32_t dram_log_phy_addr_h;
	uint32_t dram_log_phy_addr_l;
	uint32_t dram_log_buff_size;
	bool uvd_power_gated;
	bool vce_power_gated;
	bool samu_power_gated;
	bool acp_power_gated;
	bool acp_power_up_no_dsp;
	uint32_t active_process_mask;
	uint32_t max_sclk_level;
	uint32_t num_of_clk_entries;
};
struct pp_hwmgr;
int cz_hwmgr_init(struct pp_hwmgr *hwmgr);
int cz_dpm_powerdown_uvd(struct pp_hwmgr *hwmgr);
int cz_dpm_powerup_uvd(struct pp_hwmgr *hwmgr);
int cz_dpm_powerdown_vce(struct pp_hwmgr *hwmgr);
int cz_dpm_powerup_vce(struct pp_hwmgr *hwmgr);
int cz_dpm_update_uvd_dpm(struct pp_hwmgr *hwmgr, bool bgate);
int cz_dpm_update_vce_dpm(struct pp_hwmgr *hwmgr);
#endif /* _CZ_HWMGR_H_ */


@@ -0,0 +1,114 @@
/*
 * Copyright 2015 Advanced Micro Devices, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
 * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
 * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
 * OTHER DEALINGS IN THE SOFTWARE.
 *
 */
#include "hwmgr.h"
#include "fiji_clockpowergating.h"
#include "fiji_ppsmc.h"
#include "fiji_hwmgr.h"
int fiji_phm_disable_clock_power_gating(struct pp_hwmgr *hwmgr)
{
	struct fiji_hwmgr *data = (struct fiji_hwmgr *)(hwmgr->backend);

	data->uvd_power_gated = false;
	data->vce_power_gated = false;
	data->samu_power_gated = false;
	data->acp_power_gated = false;

	return 0;
}
int fiji_phm_powergate_uvd(struct pp_hwmgr *hwmgr, bool bgate)
{
	struct fiji_hwmgr *data = (struct fiji_hwmgr *)(hwmgr->backend);

	if (data->uvd_power_gated == bgate)
		return 0;

	data->uvd_power_gated = bgate;
	fiji_update_uvd_dpm(hwmgr, bgate);

	return 0;
}
int fiji_phm_powergate_vce(struct pp_hwmgr *hwmgr, bool bgate)
{
	struct fiji_hwmgr *data = (struct fiji_hwmgr *)(hwmgr->backend);
	struct phm_set_power_state_input states;
	const struct pp_power_state *pcurrent;
	struct pp_power_state *requested;

	if (data->vce_power_gated == bgate)
		return 0;

	data->vce_power_gated = bgate;

	pcurrent = hwmgr->current_ps;
	requested = hwmgr->request_ps;

	states.pcurrent_state = &(pcurrent->hardware);
	states.pnew_state = &(requested->hardware);

	fiji_update_vce_dpm(hwmgr, &states);
	fiji_enable_disable_vce_dpm(hwmgr, !bgate);

	return 0;
}
int fiji_phm_powergate_samu(struct pp_hwmgr *hwmgr, bool bgate)
{
	struct fiji_hwmgr *data = (struct fiji_hwmgr *)(hwmgr->backend);

	if (data->samu_power_gated == bgate)
		return 0;

	data->samu_power_gated = bgate;
	fiji_update_samu_dpm(hwmgr, bgate);

	return 0;
}
int fiji_phm_powergate_acp(struct pp_hwmgr *hwmgr, bool bgate)
{
	struct fiji_hwmgr *data = (struct fiji_hwmgr *)(hwmgr->backend);

	if (data->acp_power_gated == bgate)
		return 0;

	data->acp_power_gated = bgate;
	fiji_update_acp_dpm(hwmgr, bgate);

	return 0;
}
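The UVD, SAMU and ACP power-gating handlers above all share one pattern: cache the last requested gating state and return early when the request would not change it, so the DPM update path only runs on a real transition. A toy model of that pattern; the struct and function names here are illustrative, not driver APIs, and a counter stands in for the real `fiji_update_*_dpm()` call:

```c
#include <stdbool.h>

/* Toy model of the gating pattern used by fiji_phm_powergate_uvd() and
 * friends: remember the last requested state and skip the DPM update
 * when the request would not change it. */
struct toy_block {
	bool power_gated;
	int dpm_updates; /* number of times the stubbed DPM update ran */
};

static int toy_powergate(struct toy_block *blk, bool bgate)
{
	if (blk->power_gated == bgate)
		return 0; /* no state change: nothing to do */

	blk->power_gated = bgate;
	blk->dpm_updates++; /* stand-in for fiji_update_*_dpm(hwmgr, bgate) */

	return 0;
}
```

Gating twice in a row triggers only one update; only a gate/ungate transition reaches the update path again, which is what makes the handlers cheap to call redundantly.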


@@ -0,0 +1,35 @@
/*
 * Copyright 2015 Advanced Micro Devices, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
 * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
 * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
 * OTHER DEALINGS IN THE SOFTWARE.
 *
 */
#ifndef _FIJI_CLOCK_POWER_GATING_H_
#define _FIJI_CLOCK_POWER_GATING_H_
#include "fiji_hwmgr.h"
#include "pp_asicblocks.h"
extern int fiji_phm_powergate_vce(struct pp_hwmgr *hwmgr, bool bgate);
extern int fiji_phm_powergate_uvd(struct pp_hwmgr *hwmgr, bool bgate);
extern int fiji_phm_powergate_samu(struct pp_hwmgr *hwmgr, bool bgate);
extern int fiji_phm_powergate_acp(struct pp_hwmgr *hwmgr, bool bgate);
extern int fiji_phm_disable_clock_power_gating(struct pp_hwmgr *hwmgr);
#endif /* _FIJI_CLOCK_POWER_GATING_H_ */


@@ -0,0 +1,105 @@
/*
 * Copyright 2015 Advanced Micro Devices, Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
 * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
 * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
 * OTHER DEALINGS IN THE SOFTWARE.
 *
 */
#ifndef FIJI_DYN_DEFAULTS_H
#define FIJI_DYN_DEFAULTS_H
/** \file
 * Volcanic Islands Dynamic default parameters.
 */

enum FIJIdpm_TrendDetection {
	FIJIAdpm_TrendDetection_AUTO,
	FIJIAdpm_TrendDetection_UP,
	FIJIAdpm_TrendDetection_DOWN
};
typedef enum FIJIdpm_TrendDetection FIJIdpm_TrendDetection;
/* TODO: fill in the default values. */
/* Bit vector representing same fields as hardware register. */
#define PPFIJI_VOTINGRIGHTSCLIENTS_DFLT0 0x3FFFC102 /* CP_Gfx_busy ????
                                                     * HDP_busy
                                                     * IH_busy
                                                     * UVD_busy
                                                     * VCE_busy
                                                     * ACP_busy
                                                     * SAMU_busy
                                                     * SDMA enabled */
#define PPFIJI_VOTINGRIGHTSCLIENTS_DFLT1 0x000400 /* FE_Gfx_busy - Intended for primary usage. Rest are for flexibility. ????
                                                   * SH_Gfx_busy
                                                   * RB_Gfx_busy
                                                   * VCE_busy */
#define PPFIJI_VOTINGRIGHTSCLIENTS_DFLT2 0xC00080 /* SH_Gfx_busy - Intended for primary usage. Rest are for flexibility.
                                                   * FE_Gfx_busy
                                                   * RB_Gfx_busy
                                                   * ACP_busy */
#define PPFIJI_VOTINGRIGHTSCLIENTS_DFLT3 0xC00200 /* RB_Gfx_busy - Intended for primary usage. Rest are for flexibility.
                                                   * FE_Gfx_busy
                                                   * SH_Gfx_busy
                                                   * UVD_busy */
#define PPFIJI_VOTINGRIGHTSCLIENTS_DFLT4 0xC01680 /* UVD_busy
                                                   * VCE_busy
                                                   * ACP_busy
                                                   * SAMU_busy */
#define PPFIJI_VOTINGRIGHTSCLIENTS_DFLT5 0xC00033 /* GFX, HDP */
#define PPFIJI_VOTINGRIGHTSCLIENTS_DFLT6 0xC00033 /* GFX, HDP */
#define PPFIJI_VOTINGRIGHTSCLIENTS_DFLT7 0x3FFFC000 /* GFX, HDP */
/* thermal protection counter (units). */
#define PPFIJI_THERMALPROTECTCOUNTER_DFLT 0x200 /* ~19us */
/* static screen threshold unit */
#define PPFIJI_STATICSCREENTHRESHOLDUNIT_DFLT 0
/* static screen threshold */
#define PPFIJI_STATICSCREENTHRESHOLD_DFLT 0x00C8
/* gfx idle clock stop threshold */
#define PPFIJI_GFXIDLECLOCKSTOPTHRESHOLD_DFLT 0x200 /* ~19us with static screen threshold unit of 0 */
/* Fixed reference divider to use when building baby stepping tables. */
#define PPFIJI_REFERENCEDIVIDER_DFLT 4
/* ULV voltage change delay time
 * Used to be delay_vreg in N.I. split for S.I.
 * Using N.I. delay_vreg value as default
 * ReferenceClock = 2700
 * VoltageResponseTime = 1000
 * VDDCDelayTime = (VoltageResponseTime * ReferenceClock) / 1600 = 1687
 */
#define PPFIJI_ULVVOLTAGECHANGEDELAY_DFLT 1687
#define PPFIJI_CGULVPARAMETER_DFLT 0x00040035
#define PPFIJI_CGULVCONTROL_DFLT 0x00007450
#define PPFIJI_TARGETACTIVITY_DFLT 30 /* 30% */
#define PPFIJI_MCLK_TARGETACTIVITY_DFLT 10 /* 10% */
#endif /* FIJI_DYN_DEFAULTS_H */
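The `PPFIJI_ULVVOLTAGECHANGEDELAY_DFLT` value above follows directly from the formula in its comment, `VDDCDelayTime = (VoltageResponseTime * ReferenceClock) / 1600`, with integer division truncating 1687.5 down to 1687. A quick sketch of that arithmetic; the helper name is made up for this check:

```c
#include <stdint.h>

/* Re-derives the ULV voltage change delay from the comment above:
 * VDDCDelayTime = (VoltageResponseTime * ReferenceClock) / 1600,
 * using integer division as the fixed default of 1687 implies. */
static uint32_t ulv_vddc_delay(uint32_t voltage_response_time,
			       uint32_t reference_clock)
{
	return (voltage_response_time * reference_clock) / 1600;
}
```

With the documented inputs (VoltageResponseTime = 1000, ReferenceClock = 2700), the product is 2,700,000 and dividing by 1600 yields 1687, matching the default.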
